From YouTube: SIG - Performance and scale 2022-06-23
Description
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A
Okay, today — so please add yourself as an attendee and let's get started with the agenda.
A
I was looking at this earlier. It looks like there's an outage in CI, and I think it's...
B
Nothing's working. As far as I could tell, they're updating the Kubernetes cluster, and I think they're investigating. It seems to me there were some networking issues in some of the problems I saw — I have no idea, so they're just working on that. I think it's been like that the last two days or so, so hopefully they...
A
Yeah. One other thing — before this happened, around here was when I published that we improved the memory. It looks like we had a good run here, a good span where we had some good tests. That's a good sign, so I guess once we see this back, hopefully it will stay green. I think we'll have enough memory now; this gives me some confidence that we're working.
A
Yeah, okay, let's look at some others. So your pull request merged, Marcelo — the one to fix the job. There was this one green run; I was going to look at this. This is good to see. I don't know what these are, or why it started failing again — we can take a look.
B
I don't know, we would need to check that. This job here ramps the creation up — it creates like 200, 400 and 600 — and I have the impression that maybe it's failing in the last scenario, when it's creating 600. Maybe it could be related to the memory footprint that increased, and maybe we cannot create 600. I don't know; we need to investigate that.
A
What is this? Why is it referring to the VMIs that we created? Is it because it's deleting these? But why are these here?
B
I think I know what's happening here. Actually, this is very good — I wasn't able to reproduce this problem until now. Sometimes when I was running many tests sequentially, some VMs don't get deleted; they stay stuck there.
B
So I think that's what happened: there are VMs that are stuck, it takes a lot of time to delete them, and then the system is completely broken. Maybe the cluster clean failed — maybe it was not able to clean up the cluster anymore.
B
So I need to go in manually, check, and clean up the cluster and see what's happening. But this is something that...
A
How come? What don't we do — don't we do deletes? We have the delete over here, right?
A
So I follow that, that makes sense. But one other thing that's surprising to me: should we have the wait above — should we wait for the VMIs to be completely removed, not just out of the Running phase? Right now it's just waiting for them to leave the Running phase.
B
Yeah, the problem is, for example — okay, it shouldn't happen, but consider namespaces: Kubernetes has this problem where sometimes, for some weird reason, an object becomes stuck, the finalizer is never removed, and then it's never deleted.
B
It happens often for me: if it's a namespace, we just patch it to remove the finalizer. And we can see here that the KubeVirt cluster clean has this logic to actually wait, and after a while — probably once all the pods were deleted but the VM is still stuck there — it removes the finalizers, you know, a force delete.
B
Yeah, or we need to do a wait that has the same logic as the cluster clean: wait a little bit, and if it doesn't get removed, also do this patch and remove the finalizer — which would be the same thing the cluster clean does. But then cluster clean shouldn't fail when it tries to patch a VM to delete it and the VM doesn't exist anymore.
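The wait-then-force-delete behavior described above could be sketched roughly as below. This is only an illustration with hypothetical helper names — KubeVirt's actual cluster-clean script has its own implementation — but it captures the shape: try a graceful delete, poll, and only strip finalizers as a last resort.

```python
import time

def remove_finalizers(obj):
    """Strip all finalizers, mirroring a merge patch that sets
    metadata.finalizers to an empty list."""
    obj.setdefault("metadata", {})["finalizers"] = []
    return obj

def delete_with_grace(get_obj, delete_obj, patch_obj,
                      timeout_s=60, poll_s=1,
                      clock=time.monotonic, sleep=time.sleep):
    """Issue a delete, wait up to timeout_s for the object to go away,
    then force-remove finalizers if it is still stuck.
    get_obj/delete_obj/patch_obj are stand-ins for API client calls."""
    delete_obj()
    deadline = clock() + timeout_s
    while clock() < deadline:
        if get_obj() is None:      # object gone: graceful delete worked
            return "deleted"
        sleep(poll_s)
    obj = get_obj()
    if obj is None:
        return "deleted"
    patch_obj(remove_finalizers(obj))  # last resort: drop stuck finalizers
    return "force-deleted"
```

The injectable `clock`/`sleep` parameters are only there so the loop can be exercised without real waiting.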
D
Sorry, I had a question: is there a way we could change the order of deleting the resources to make sure the finalizer goes away? The reason I'm asking is that in my experience, whenever a finalizer is stuck, it means the controller responsible for removing that finalizer was deleted first, and then the resource was deleted — so the finalizer got stuck and we have to remove it manually.
D
So if we can make sure that all the operands — that is, the VMIs — are deleted first, and only then the controller and the entire KubeVirt stack is deleted, then there should not be any reason for us to delete the finalizer, right?
B
It's
a
very
good
point,
very
good
point,
and
I
I
wasn't,
I
didn't
think
about
it
and
it
might
be
easy
as
ryan
roll
up.
We
are
deleting
cooper
and
then
we
try
to
patch
so
yeah.
It's.
D
Yeah, so maybe the correct way is to delete all the VMIs first — so we know all the KubeVirt operands are deleted — then delete the KubeVirt operator, and then make sure the other cleanup is completed.
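The ordering proposed here can be sketched as a fixed teardown sequence. The step names are hypothetical — the real cleanup script differs — but the point is the dependency order: operands go first, so the controller is still alive to remove their finalizers.

```python
def ordered_teardown(delete_vmis, wait_vmis_gone, delete_operator, cleanup_rest):
    """Tear down in dependency order: operands first, controller last.
    Each argument is a callable standing in for one cleanup phase."""
    steps = []
    delete_vmis();     steps.append("delete-vmis")
    wait_vmis_gone();  steps.append("vmis-gone")      # block until no VMIs remain
    delete_operator(); steps.append("delete-operator")
    cleanup_rest();    steps.append("cleanup")
    return steps
```

If the deletion still hangs with this ordering, that points at a genuine bug in the deletion path rather than a cleanup-script race.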
A
It's not actually meant to do graceful cleaning. If we're creating things and we want to measure — we're already making an attempt at deleting; I think we're just not following all the way through to allow them to fully, gracefully delete, which we can fix. What I was afraid of was that we have a bug here and we just have VMs sitting there or whatever, but I don't think that's the case. It's just that we're not...
D
Sorry — I was saying that if we change the order, that is, make sure the VMIs are deleted first and then KubeVirt is deleted, and the deletion still doesn't complete, that means there is a bug in our deletion process, right? At that point we should go root-cause that bug and fix it. I'm just trying to say that the order should help, and if it doesn't, then there are other problems we should solve.
A
Yeah, so I guess the point is that the cleanup script is actually catching us — catching us relying on it for cleanup when we shouldn't be. So like you said, let's handle the graceful cleanup ourselves, and if it doesn't work, then we have an issue and we should raise that and get it fixed. So yeah, let's include that.
A
Well, so what's happening is: the load generator is completing its work once it sees that there are no VMs in the Running phase. Right — we just need to not complete the load generator until there are no VMIs left at all. That would change the ordering there; that would address the issue.
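The distinction drawn here — "no VMIs in Running phase" versus "no VMI objects left at all" — is just a change of completion predicate. A minimal sketch (the phase strings are illustrative):

```python
def done_not_running(vmis):
    """Current behavior: finish once nothing is Running.
    VMIs that are still terminating count as done."""
    return all(v["phase"] != "Running" for v in vmis)

def done_fully_deleted(vmis):
    """Proposed behavior: finish only when every VMI object is gone."""
    return len(vmis) == 0

# A VMI whose pod has finished but whose object still exists:
leftover = [{"name": "vm-0", "phase": "Succeeded"}]
```

With the first predicate, the load generator can declare success while objects (and their finalizers) are still being torn down; the second forces it to own the graceful deletion.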
A
Yeah, okay, all right — so that gives us a follow-up. Okay, let's go to the next one. So, Marcelo, I wanted to hear from you: do you have any tracing results from this issue? I don't know if you have them anywhere — it would be interesting to see if you have any.
B
The trace right now is on another machine — I recorded it somewhere — but I can put up the figure that we were discussing before.
A
Oh yeah, okay, we can talk about this. So this was — yeah, okay. How did you generate this, by the way? What did you do to find this?
B
Yes, so in this scenario I didn't change any configuration of the controller — it's the default QPS that was there before — and I'm creating two thousand VMs on a node, and maybe that's enough.
B
You know, 200 VMs per hour or so — okay, I need to double-check if it was 2,000 or 1,000; let me double-check that before. Anyway, I have it here, and then I got the virt-controller log, and in the virt-controller log we have the trace results.
B
The trace is actually reporting the latency of some functions when they are above a threshold, and these two functions keep appearing: updateStatus and sync. I just did a parse and got all the latencies for updateStatus and sync from the one-hour execution that I took. Where we see this high latency here is where the VMs are being created — okay, so it's two seconds to process this.
B
If
they
started
then
sync
was
is
lower
in
the
beginning.
But
after
a
while
it's
you
know,
it
was
not
slow
anymore
and
but
update
starts
remaining
as
well
for
a
while,
probably
when
to
create
audience,
but
again
after
I
increase
the
quartz
per
second,
which
the
the
pr
that
increase
that
that
we
we
can
also
point
here.
B
It's a burst test, but it has a rate limit on creation — creating 20 VMs per second.
A
It makes me curious — I wonder, if you increased this number to 3k, whether we would see this increase level off at like 2,500 or something. That would sort of back up the theory that this is just a rate limiter: the amount of requests we're making is causing us to slam the rate limiter, and it's consistently slowing us down to the same rate. Yeah.
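The leveling-off behavior predicted above is exactly what a token-bucket rate limiter (the kind client-go uses, configured by QPS and Burst) produces: past the initial burst, throughput is pinned at the refill rate no matter how much load is offered. A tiny back-of-the-envelope helper, using the 20/s figure from the discussion:

```python
def time_to_admit(n_requests, qps, burst):
    """Seconds until the n-th request clears a token bucket that starts
    full with `burst` tokens and refills at `qps` tokens per second."""
    if n_requests <= burst:
        return 0.0                      # absorbed by the initial burst
    return (n_requests - burst) / qps   # the rest drain at qps
```

So whether the test creates 2,000 or 3,000 VMs, the tail is processed at ~20/s — the curve should flatten at the limiter's rate, which is what makes the hypothesis testable.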
B
Okay — and we can see that we already have like one second for updateStatus and sync, you know, continuously.
A
Okay. I wonder what else we could — well, maybe what we could do is...
A
I think in some cases it'll be obvious — it'll be like an API request or something. Let's see if we can point these out; that might give us some things we can consider as optimizations, things that will lower the amount of requests we make. We'll do that after we go through this one first — we can come back if there's time.
A
Okay, this one is for the controller's node queue — just too many key requeues.
B
Increasing the QPS — the first bump there is the default one, which is 20 requests per second for the controller. Then I gradually increased to 100, 200, 400 and 600 per second, and the virt-controller node queue keeps adding more — you know, retrying the keys.
B
Typically, when a key retries, it's because processing failed and it gets requeued — we retry it. So maybe it's expected for some very small number of keys to be added back to the queue because they couldn't be processed — something happened, maybe it was the rate limit, for example — but this controller is failing to process something at a crazy rate.
B
Unfortunately
andrew
is
not
here
and
he
was
investigating
this.
He
creates
a
plc
and
sent
me
oh
yeah.
He
wrote
something
I
didn't.
I
didn't
read
that.
A
So
by
design
it's
constantly
reviewing
nodes
with
one
minute
delay
the
catch
is
that
the
controller
watches
for
both
node
and
vmi
events.
So,
one
minute
after
the
burst
of
vmi
events,
there
will
be
a
burst
of
req
re-enqueue.
The
reconciliation
is
short
and
simple.
If
the
note
is
responsive
but
still
can
impact
for
controller
performance
to
improve
this,
we
would
reconcile
nodes
and
bmis
put
in
reading
key.
Only
on
note
events.
B
Yeah, it was clear to him — I didn't visualize everything here, but he created a PoC and sent the code. I didn't test it yet; I'll test it to see if it works. He will submit a PR for that, maybe next week, so we can discuss it better in our next meeting. I just wanted to introduce it, since we are working on that.
B
It's constantly requeueing that key every one minute — and maybe that's too short, isn't it — just to check if the node is responsive, responding. That's my interpretation; I'm not really sure what he's doing in the PoC, I need to check the code. Unfortunately he couldn't join today — he would explain it better — but what he was suggesting here is, instead of reconciling this node responsiveness every minute, to check for VMIs that are orphaned or in an error or failed state, which is what we need this for.
B
Yeah — we can check more details of the solution, but I definitely think that's the problem here: we keep requeueing. Well, I also don't fully understand how it increases with the...
D
...number of VMIs, yeah. I think what I am interpreting is that it's watch plus poll: the watch starts the first time a node or a VMI is created on a node, the node is put in the work queue, and once it syncs, at the end of everything, it will re-queue for the next minute, right?
D
So
what
I
am
interpreting
is
that,
after
after
the
load,
one
minute
after
all,
the
all
the
node
events
will
come
and
fill
the
work
queue
at
the
same
time
and
then
again
after
every
minute.
It
will
continue
to
do
happen.
So
my
question
is:
can
we
add
some
kind
of
jitter
to
the
re-queuing
logic
so
that
it
would
be
requeued
one
minute,
plus
data
delta?
Second,
after
the
initial
resync?
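The jitter suggested here is a standard trick for de-synchronizing periodic requeues: since client-go's workqueue exposes `AddAfter`, only the delay computation needs to change. A minimal sketch (parameter names are illustrative, not KubeVirt's):

```python
import random

def requeue_delay(base_s=60.0, jitter_frac=0.2, rng=random.random):
    """Base resync period plus up to jitter_frac extra, so nodes that were
    enqueued together in one burst drift apart over successive requeues
    instead of re-firing simultaneously every minute."""
    return base_s * (1.0 + jitter_frac * rng())
```

With `base_s=60` and `jitter_frac=0.2` the delay lands in [60 s, 72 s), so a burst of simultaneous node re-enqueues spreads out within a few cycles.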
B
Yeah, I think that might be what's happening. I also wonder here whether it's checking the node responsiveness for every VMI — I don't know, I didn't check the code, I'm just supposing here — because if it's checking the node responsiveness for every VMI, we shouldn't do that. We should have something that checks the node from time to time, independent of creating every VMI.
A
That's a good point — I mean, you're at 12 nodes, right, for this test? Yes — yeah, that's interesting. I'm curious how Andrew got to this, that this is what's affecting things — how did he even come to this?
B
Yeah
or
maybe
related
to
the
vmi
yeah,
so
what
it's?
When
we
increase
now
the
the
the
cars
per
second,
the
this
node
controller
can
do
more
requests
and
then
it's
crazing,
you
know
doing
requests
and-
and
I
forgot
the
name
who
was
talking
before
sorry.
Can
you
say
your
name.
B
Okay — I think he was saying that maybe we should increase this interval, you know, the delay between requeues of this specific key that checks the node responsiveness, to a larger interval. I think one minute is too small for that check. I don't know, we can discuss that; I don't know what other kinds of controllers do for this.
A
I
think
it
would
be
sorry
well,
as
I
said
we,
we
should
grab
andrew
for
the
next
meeting,
because
I
I
think
we
have
a
lot
of
questions
and
we're
just
we're
we're
building
a
bunch
of
questions
I
mean
I
want
to
write
some
of
these.
Let
me
write
some
of
them
down
because
I
mean
to
me,
like
I
don't
understand
how
how
we
got
to
like
how
we
got
to
the
node
controller.
Like
I,
I
think
it's
like
it's
like
to
me.
A
It's
like
you're,
running
with
12
notes
for
your
test.
This
is
such
all
no
tests.
How
is
the
node
controller.
B
So we are creating like 1,000 or 2,000 VMs, and as I mentioned, we can expect some requeues of some keys, like VM and VMI, but those are very small, aren't they — around five. But the virt-controller node queue is at 8.4 per second, so it's generating a lot of requests. Decreasing the number of requests to the Kubernetes API server would have a big impact — we definitely need to decrease that.
A
I see, okay — I think that makes sense then. I think we have a general idea. So again, I guess what's happening — just making an assumption based on what we're seeing here — is that the number is related to the number of VMIs, the way the node controller...
B
Yeah — when it requeues, it means it failed to do some check and needs to requeue. But...
B
Yeah,
it
shouldn't
be
eight
per
second,
isn't
it
more
or
less
seven
or
we
can
see
more
or
less
seven
keys
per
second
for
each
node?
What's
more
or
less
what's
happening
here,
yeah,
it's
true!
It's
too
much
it's
too
high.
So
we
need
to
understand
why
it's
requiring
too
much-
and
I
think
this
should
be
if
it's
related,
it
shouldn't
be
related
to
the
vmi,
otherwise
we're
creating
two
thousand
here.
We
would
see
much
more.
D
So that's how I am thinking we are getting to this place. The problem is that if we don't do this VMI-to-node key mapping, how do we know that a particular node has a VMI hosted on it and that we need to check its responsiveness?
D
We would have to list the VMIs on that node and then figure it out, so that would be additional work as well. That's something we'd have to think through.
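The mapping concern raised here is what an informer indexer typically solves: maintain a node-name-to-VMIs index as watch events arrive, so "does this node host VMIs?" is a lookup rather than a full list. A sketch with a plain dict standing in for the indexer (names are hypothetical):

```python
from collections import defaultdict

class VMIIndex:
    """Node-name -> set of VMI names, updated from add/delete events,
    so the node controller can pick check candidates in O(1) per node."""
    def __init__(self):
        self.by_node = defaultdict(set)

    def upsert(self, vmi_name, node_name):
        self.by_node[node_name].add(vmi_name)

    def delete(self, vmi_name, node_name):
        self.by_node[node_name].discard(vmi_name)

    def nodes_needing_check(self):
        """Only nodes that currently host at least one VMI."""
        return {n for n, vmis in self.by_node.items() if vmis}
```

This keeps the responsiveness check per-node (bounded by node count, e.g. 12) instead of per-VMI (thousands), which is the asymmetry the discussion circles around.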
A
It would be interesting. I think what you said about how we're doing this and the algorithm — that sounds possible. It would be good to have Andrew here to see what his findings are, or if someone else also wants to look at this. It would be good to get a full explanation of this, because I think we need the explanation and we need to understand the use case, and then we can...
A
That
would
be
interesting
to
see
we
can
find,
because
I,
I
really
think
yeah
because
I
mean
I
I
didn't
realize
that
this
was
how
this
was
the
highest
one
on
here,
and
this
I
mean,
if
we,
if
this
is
this
seems
like,
doesn't
make
any
sense.
I
mean
if
this
was
like
five
or
something
like
the
rest
of
these.
A
I
I
wonder
what
our
what
our
qps
could
get
slipped
down
to.
I
mean
like
quite
quite
a
bit
lower,
so
that
would
be
so,
and
we
were
talking
about
this
in
put
requests
and
whatever
last
time
it
seems
like
this
seems
to
be
like
our
the
most
interesting
one
here.
So
let's,
let's
do
some
investigation.
Let's,
hopefully
maybe
we
can
get
andrew
in
here
next
time
and
see
what
else
he's
discovered
and
yeah.
Let's
learn
about
this,
this
node
controller
and
see
how
we
can
improve
that
it
seems
to
be
yeah.
A
We seem to be circling the issue — I think it's pretty obvious what the issue is here; it sticks out quite a bit. Okay, good, all right, that makes sense to me. Let's get some more information on it, have a discussion, and see if we can improve this. That's good.
A
Okay, cool. Marcelo, can you take the action item to ask Andrew to join for the next meeting — see if he's available, see if we can get him to come in and talk about it — and let's get some more information, yeah?
A
Okay, thanks. Okay, so that makes sense — that's good. That's actually really promising; I think that's definitely an area where we can make a huge improvement. All right, let's go back to this one. Let's look at the code real quick and see what we can learn from where the traces are — sync. Let's see: package virt-controller, watch, if you want to go...
A
Okay, so here's sync. Let's see how long this is — okay, not that long, it's like 100 lines. Let's just look through it really quickly. So, to set the level: this is running when we're creating the first 500 VMs, and we're creating 2k VMs on a 12-node cluster.
A
It takes from almost two seconds up to two and a quarter seconds to run through sync — and that doesn't include requeueing anything; I'm pretty sure it doesn't. That's what I remember about this stuff: the traces don't count the re-queue. So this function is going to take almost two and a quarter seconds.
A
Let's start at the top: getting the matching pods for the VMI — that should be a quick check; we already have the object. Then the orphaned attached pods.
B
It's doing a lot of GETs, you know, and PUT requests, things like that — we saw there are a lot of PUT requests.
A
Seems like it. Basically, the way it summarizes is: if we're deleting, then we delete our pods — okay, we're not doing that; if it's final — because it has the finalizer, it's in the final state — we delete the pods. So we're syncing our delete states, orphaning attached pods...
A
Okay, well, it would be interesting to see — I mean, some areas... let's see if we can get it — right, there we go. If we can, like...
A
The rest of this looks pretty straightforward — this looks very quick. Yeah, I don't think anything else in here is doing anything with the API; it's just those two.
A
And
maybe
marcelo
when
you
do,
if
you
want
to
you,
know
the
next
time
you
do
this
test,
just
throw
a
step
trace,
just
basically
copy.
I
think.
Is
it
like
this?
I
think
it's.
I
think
you
just
add
what
you
do
is
add
it's
it's
not.
I
don't
think
it's
exactly
like
this.
I
think
you
have
to
oh
what
you
have
to
do
is
you
have
to
do
a
you.
Go
to
change.
A
Remove this defer, add a step trace, then add another step trace here and another one here — but make the last one a defer, so that it covers the rest of the function. That'll do it. It's basically a copy and paste; just change this to whatever we're tracing — delete attached pods, orphan attached pods, or the data volume check — and that'll do it. It's pretty easy.
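The copy-and-paste step-trace pattern described above amounts to recording named checkpoints inside one long function, so a slow section shows up by name. A rough sketch (hypothetical API — KubeVirt's actual tracing utility has its own interface):

```python
import time

class StepTrace:
    """Record (step-name, seconds-since-previous-step) pairs, so placing
    a step() call after each phase of sync attributes latency per phase."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.last = clock()
        self.steps = []

    def step(self, name):
        now = self.clock()
        self.steps.append((name, now - self.last))
        self.last = now
```

In a function like sync, you would call `tr.step("deleteAttachedPods")` after that block, `tr.step("orphanAttachedPods")` after the next, and wrap the final call in a defer-equivalent (e.g. `try/finally`) so the tail of the function is covered too.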
A
Okay, yeah, give that a shot — I'm really curious to see what we'll find. Okay, all right, should we do updateStatus too, then? We've got four more minutes. Where's updateStatus...
A
Okay, that's interesting — this whole block is kind of interesting: status updates, topology, conditions.
A
Oh, here — yeah, okay. Where were we... ready condition, update volume status.
A
And yeah, it's fine. We cannot...
B
...change that, and but...
A
Yeah, this might just be it: we're hitting the rate limit, and that's that — which is what it is; we need the update, that's fine. So those two, and then what was the one up here? Are there two up here? No — it was this one: this sync right here. Those three areas look like hot paths.
A
Yeah,
the
rest
of
it
looks
like
it's
just
removing
conditions
or
just
editing
gamble.
It
looks
like
so
yeah,
okay,
all
right.
Well,
so
then
marcel
may
we
do
your
next
experiment
and
try
try
copying
that
tracing
around
and
let's
see
what
see
what
you
find.
If,
hopefully,
you
still
have
this
tool,
you
can.
We
can
reuse
this
and
it
would
be
cool
to
see
if
we
can
have
another
chart
that
shows
like
based
on
some
of
this
data.
That'd
be
really
cool.