From YouTube: SIG - Performance and scale 2022-07-28
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: Okay, welcome to SIG Scale, everybody. It's July 28th, 2022. Let me share the link to the notes. Okay, please set yourselves as attendees. For today I just have one agenda item. We might also do the performance job results, but I mainly wanted to discuss the KubeVirt v1 release and what we would like to do with regard to SIG Scale.
A: Basically, what I'm looking for is this: when we look at the KubeVirt v1 release, when it's officially done, what information can SIG Scale contribute to that release? This is a bit of an open question, because I really think there are a lot of avenues we can take. What I was looking for in terms of ideas is things like: can we say what KubeVirt's ability to scale is? How many nodes, I think, is one question.
A: People would be interested in how much memory this release expects for virt-launcher, or maybe even KubeVirt as a whole. I think all those things would be viable. Maybe we could do a few things on the creation rate or something, I don't know. I think there are some options we have, but what I kind of want is to get started on the things that people think are achievable, that we could state as part of KubeVirt v1. Let me add some of the ones I mentioned.

A: Number of nodes. What else will we say?
A: So for scale: we have a little bit of performance, we have our launcher pod. The way I look at this is, for launcher memory consumption: if I wanted to build a cluster, I'd need to know how much RAM I need, so it's going to be helpful for my calculations. Same here: it would be good to have a breakdown of how we want this to change over time; we don't want this to increase. And the same with KubeVirt scale: this is something we want to measure, and we want to see if we can increase it over time.
C: Yeah, I think we can specify, because there is this bullet, "performance and scale", where I think we would need to say what exactly: creation time, or deletion time, or, as you said, the create rate. And probably also specify what hardware we tested this stuff on, because that's also some kind of reference.
A: Okay, I don't want to overdo it, but I just want to get to the things that we feel confident about, so that when we say v1, this is something we don't expect to regress, and we feel confident these numbers will be accurately seen in the wild.
A: So, number of nodes: how would we get this information? I think we could start with whoever's got the highest number of nodes that we've seen in production. That might be a very simple way to test this, or just to advertise this metric, because I don't think we have a way to test it right now. So this could just be: okay, we've seen X number of nodes in prod, whatever that is.
C: And maybe also, beyond the number of nodes, the number of VMIs running on those nodes is important, because you can run one VMI per node, but you can also run ten VMIs per node, and those are two different scenarios. I think both are interesting, because if you have really large nodes, it's also interesting to see how many VMIs you can pack, etc. Yeah.
A: Okay, so I guess we can do some sort of... maybe this is the same, so we'll do it like this. Let me move this up here. There. So we'll do: what's our scale topology?
A: Maybe we come up with a chart or something like that which can give us a sense of: here's something that has been proven to work, and this is what you should expect to work with v1. So if I were someone looking at this, here's what my takeaway would be. If I were to read this, and we'll say it's Red Hat:
A: Red Hat has seen this number of nodes in production, and they have this number of VMs. My takeaway would be: okay, I can achieve at least this number of nodes, since it's been done by someone, and regardless of how I arrange the number of VMs, I should at least be able to achieve this number of nodes. I think that's valuable. And then the same with this: say you have a different number of nodes...
A: ...whatever; someone has seen this number of VMs, so there should be a configuration where this is possible. And then hardware and Kubernetes cluster information: again, the same thing. This won't be a one-dimensional statement like "KubeVirt can scale to so many nodes"; it's more like we're giving a recipe of a few things that could get it to scale. So I think that's valuable. I think that's a good start.
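The "recipe" idea above, a proven combination of node count, VMI density, and hardware, could be captured as a simple record. This is only an illustrative sketch; the type and field names are hypothetical, not anything from KubeVirt itself:

```python
from dataclasses import dataclass

@dataclass
class ScaleTopologyEntry:
    """One proven scale data point: this combination has been seen to work."""
    nodes: int          # worker nodes in the cluster
    vmis_per_node: int  # VMIs packed onto each node
    hardware: str       # free-form hardware / cluster description

    def total_vmis(self) -> int:
        # Headline number a reader would compare against their own target.
        return self.nodes * self.vmis_per_node

# Example entry of the kind discussed: many nodes, modest VMI density.
entry = ScaleTopologyEntry(nodes=100, vmis_per_node=10,
                           hardware="16-core / 64 GiB workers")
```

A chart of such entries would advertise what has been proven rather than a single theoretical maximum.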
A: I think what we'll do is try to build on this, maybe isolate some of these variables, but this makes sense as probably the first way we could advertise this information so it's not misunderstood. I think that's a good idea, okay. So what about this one: do we need to do the same thing for virt-launcher memory consumption?
C: I think we just need to be careful, because virt-launcher usually consists of two processes: one is the launcher monitor, as it's called, and then there is the main virt-launcher process. And although the monitor is quite small right now, I think, in the latest KubeVirt...
C: ...it still adds up if you want to start many VMIs on a single node. So we just need to be careful about this, but I feel it's okay to just state the consumption there, yeah.
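Since virt-launcher runs as two processes (the monitor plus the main launcher), a per-pod memory figure has to sum both. Below is a minimal sketch of that bookkeeping, assuming RSS is read from each process's `/proc/<pid>/status`; the helper functions are hypothetical, not KubeVirt code:

```python
def parse_vmrss_kib(status_text: str) -> int:
    """Extract the VmRSS value (in KiB) from a /proc/<pid>/status dump."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            # Line format: 'VmRSS:     12345 kB'
            return int(line.split()[1])
    raise ValueError("no VmRSS line found")

def pod_rss_kib(process_statuses: list[str]) -> int:
    """Total RSS for a virt-launcher pod: monitor plus main launcher process."""
    return sum(parse_vmrss_kib(s) for s in process_statuses)

# Two fabricated /proc status snippets: the small monitor and the larger launcher.
monitor = "Name:\tvirt-launcher-monitor\nVmRSS:\t  8192 kB\n"
launcher = "Name:\tvirt-launcher\nVmRSS:\t 81920 kB\n"
total = pod_rss_kib([monitor, launcher])
```

The point of the discussion is exactly that reporting only the main process would understate the per-VMI cost when many VMIs share a node.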
A: So along those lines: I think the launcher memory can vary based on the number of devices. I believe that if we're passing in more than about three devices, the memory consumption increases.
C: Yes, but we could just say that the memory consumption we state is with the defaults, which I think are currently set to 10 or something like that. Because it can depend a lot; if you change the number of threads, for example. We could spend a lot of time trying to see how memory consumption depends on threads, but I'm not sure that's valuable for a consumer who is just getting started.
A: Well, one question about the threads: does the number of threads have anything to do with the number of devices, for example? How would this change? Or is what you're after simply: here's what the expected thread count is, and it should not change over time?
A: Okay, so the only thing with this is that we need to figure out... I mean, this might be a problem we at least have to address on the performance side, but is this valuable? If I come along as a user and look at this, threads might be too low-level for how I want to consume this information in a v1 setting. As the user, the customer, I want to know: what's my performance, what's my scale? That's what I want to get from this information.
A: The number of threads isn't going to tell me how well I can perform or what my scale is. It would be a comparison across releases, but it might be too low-level. Is there something else we could use? Does this lead to some other information, like causing it to consume too much memory or something? What does this cause?
A: Okay, we'll think about that. The thing I'd be interested in is the number of devices: if you have a bunch of SR-IOV devices, and we increase it past a certain count, it increases the memory. That would be good to measure; let's just remember that.
A: Okay, we have libvirtd memory consumption, this one. Okay, this is low-level as well. We're going to think about these two; I'm going to pull them out for now, though. We need to investigate these two, because I don't think we can include them just yet.

A: Maybe how about PVCs: whether we have PVCs or not, PVC creation time, maybe the number of API servers? I think all of that could affect this.
C: I don't think we should worry about PVCs, for example, because that's something optional and also very dependent on the setup. If we are starting a VMI, it depends how the user provides the image to the PVC, for example, and also how the PVCs or PVs are provisioned. So I feel this may also affect the creation time and deletion time, but it's not helpful for the overall metric we want to measure.
A: ...you saw it, and it's like: the p95 is something and the p50 is something. We don't currently have deletion time in that job, but we can at least do creation time. I think that's okay; that would give us a starting point. I think that's probably the most we can commit to, just because, like you're saying, it gets...
A: ...it really gets into the details. And at least we have a job that says something we can use. Actually, here's what I like about this: if we do it through CI, the CI is going to stay consistent, or should stay consistent, as we go through different releases of KubeVirt.
A: So we should be able to take this metric and consistently show the changes. Say we're getting a performance improvement because of a pull request, for example: maybe in that release we say "performance improvement", show it in the p95 and the p50, and link to the PR or something.
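The p50/p95 numbers discussed here are just percentiles over the per-VMI creation durations a CI run collects. A small sketch of that computation (the sample data is fabricated):

```python
import statistics

def creation_percentiles(durations_s: list[float]) -> dict:
    """p50/p95 of VMI creation times, the two numbers quoted from the job."""
    ordered = sorted(durations_s)
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points.
    cuts = statistics.quantiles(ordered, n=100)
    return {"p50": cuts[49], "p95": cuts[94]}

# Fabricated sample: 100 creation times spread evenly between 1s and 10s.
samples = [1.0 + 9.0 * i / 99 for i in range(100)]
result = creation_percentiles(samples)
```

Running the same computation on each release's CI output gives the release-to-release comparison the speakers want.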
B: Right, I'm wondering if this context will lag the releases. Let's say we release v1 and publish these metrics on the date of the v1 release; at that point in time, the three-month window is not actually the v1 release.

B: These numbers are out in some form in the release notes today.
B: So it's not the entire v1 release, right? Maybe we could report from code freeze all the way up to the release, or something where the code base is more reflective of the v1 release, rather than a change across time.
A: So three months go by, and we've been testing the performance for those three months. Do we still have the problem then? If we've been testing from v1 to v1.1, and we say "here's what we saw throughout this time period, and it's the same performance", do we still have this issue with the way we're doing the release?
B: I think that would be more reasonable. On the alternative side: is there any value in having a job that starts, say, 10 performance-and-scale runs against one particular image, let's say the v1 image build, and we report the numbers from that job? Is there any value in a job like that?

B: That way we can concretely say: okay, the v1 release has these performance numbers, and the v1.1 release has these performance numbers. And even before we are about to release v1.1, we can compare: okay, there is a regression, we are significantly down on our p95s or p50s, and we need to block our release or take XYZ action.
A: Yeah, I think what I'm hearing you say is that we need some sort of official test that we run right before we release, that tells us exactly these numbers, but based on a code freeze. So instead of looking at it over time, we do it right at the end, so that we make sure we're not misled by, say, an average over time.
B: Yeah, maybe. I could see value in both of those. The average over time is also important: there's one data point that can be had across, say, a three-month average, and then the other data point is just for release management, tracking the numbers across releases and how we are doing in terms of scale.
A: I think what we could do: maybe there's a patch that's just the official v1 release, whether it's a README change or something, I don't know, and on it we run the performance job, or we just run it manually, whatever, and that's what we use as our final data reading, and that's what goes into the release notes or something. Yeah, I think that's fine. Or do you want to see the average, or whatever?
A: I guess for right now, because we don't have a first release to base off of and compare against, we could just do it based on our last reading. That's what we can do: we run a job right before we do v1, and that's what we report. Yeah, I think that's fine. Over three months, whatever we can. So I guess I'll put it this way: maybe we don't do the three months; we'll just say, "here's our performance job."
B: Okay, yeah. If you want to do a little more averaging, then from the same pre-release image we can run 10 of those and average it. That way it will be more certain, and it's environment-specific, but yeah, it sounds good to me.
A: Yeah, the reason I was thinking about the average is that I wanted to try to incorporate the people doing their PRs: if there is a performance increase, it would be interesting to have that data to show, and when it happened. But maybe that's just a formality, something we do at the end. You know: here's what we have in the release notes, this is a performance increase, and here's what the increase is.
B: And we can run this job as an optional job on the PR itself, right? For example, Marcelo was working on a PR that was affecting the creation time. If we had a job reporting numbers like that, we could set it up to run on that PR and report the numbers there.

B: That's one way of concretely finding out which numbers a PR specifically affects.
A: Yeah. Okay, I think the right way to wrap this up is that we don't have the infrastructure right now to really get that data, the "here's where we can see the change come in, and the PR" kind of thing. I think we just don't have the infrastructure to track it.

A: So let's go with what we know is going to work: let's run this at the end, those will be our numbers, and we'll compare them to the run we did at the exact same point in the previous release. I think that's a good starting point. We can always try to improve and be more specific, like showing the individual changes in performance, because I think that's a different problem.
B: Yeah, I think that's a very valid concern, and one thought that immediately comes to mind is: can we run this performance job as a weekly, or maybe a nightly, and then process it in our SIG Scale community calls, the way we process other jobs? That way, over time, we know what commits are going in, and we can maybe report these numbers in the KubeVirt community calls.

B: It might be a lot of work, but it's just a thought.
A: Yeah, I think that's the right direction. We mostly do that; I try to review it when we can. But the problem is that I don't always remember exactly what the numbers were. It has to be automated; that's the difficult part. We need to take these numbers, and they need to be...
B: I think there is value in adding deletion time as well, because sometimes the finalizer stays on, and it would be good to understand when the delete command was issued and at what time the object was finalized and released. To me, that seems to be valuable information.
A: Yeah, I agree. It might actually be a small change: we do delete, but we run our audit tool only after the creates, so it might be as simple as running the audit tool again after the delete. It might be an easy change, and it would be good to have, but we do need a change to address this.
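The deletion-time measurement described above, from the moment the delete is issued until the finalizer releases the object, reduces to a timestamp difference once both events are recorded. A sketch using RFC 3339 timestamps of the kind Kubernetes objects carry; the function is illustrative, not part of the audit tool:

```python
from datetime import datetime

def deletion_seconds(deletion_requested: str, object_gone: str) -> float:
    """Seconds between the delete command (deletionTimestamp) and the moment
    the object actually disappeared, i.e. after finalizers ran."""
    fmt = "%Y-%m-%dT%H:%M:%S%z"
    start = datetime.strptime(deletion_requested.replace("Z", "+0000"), fmt)
    end = datetime.strptime(object_gone.replace("Z", "+0000"), fmt)
    return (end - start).total_seconds()

# A finalizer that stalls would show up as a large gap here.
elapsed = deletion_seconds("2022-07-28T10:00:00Z", "2022-07-28T10:00:42Z")
```

Percentiles over these durations would then slot into the same p50/p95 reporting as creation time.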
A: Yeah, why don't you create an issue for it? I'll add you; yeah, an upstream issue, okay. And then if you have a PR, and we have it in time for v1, I can review it. I actually don't think it'll be a ton of work; we just need to run our audit job.
A: No, there isn't a date yet. What's going to happen, I think, is this: I'm talking with some of the maintainers now, trying to find the right way to coordinate this. I don't think there's a date, but I'm pushing to find one, because there hasn't been a date for a while. So I'm working on figuring that out.
B: Okay, yeah, I think that sounds great. I should be able to follow up on that issue before v1.
A: Okay, all right, thanks, Elaine. Okay, this looks pretty good, everyone. So we have three things: we'll do a scale topology with some context, virt-launcher memory consumption with some context, and creation time from our CI job. I think those are really good, and we'll have to think about the others some more.
B: Okay, one question I had: on the creation and deletion time, I think the concerns you raised and the automation are going to be good nice-to-haves. Do we want to capture that somewhere? So that if somebody comes in and says, "okay, I have some spare time and I want to contribute", we can...
B: ...we'd need that. I don't know whether it would be trivial or not. Maybe the first starting point is just weekly emails or something like that: "hey, last week it was this, today it is this." That's one way; to me that would be an easy way to get started, but the eventual goal would be maybe a chart that someone could go over.
A: Yeah, and we could look at... yeah, those are all good ideas. Okay, okay, that's a question, sure.

A: I'm new here, actually. I'm also from the performance and scale department; my manager wanted me to come here and start picking up some of the stuff Mercedes is doing, so I might start joining this meeting soon.
B: I was curious why the VM boot time is not included.
A: Okay, so we don't have a way to measure that today. I don't believe there's a metric for it, so I think we have a gap there; we would need a way to measure it first. Right now we only have the VMI phase transition times: the phases, like Scheduling, Scheduled, Running. The transition from Scheduled to Running is close; it's when the domain gets defined.
A: The VMI goes to the Running phase somewhere after that, but we have no endpoint, no point where it says the guest has booted.
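The phase transition times mentioned here are recorded on the VMI status (KubeVirt keeps a list of phase transition timestamps), and per-phase durations fall out of diffing consecutive entries. A sketch over a fabricated status fragment; the field names follow the shape of the VMI's `phaseTransitionTimestamps` but should be checked against the actual API:

```python
from datetime import datetime

def phase_durations(transitions: list[dict]) -> dict:
    """Seconds spent between consecutive phase transitions, e.g. how long a
    VMI sat in Scheduling before Scheduled, then Scheduled before Running."""
    def ts(s: str) -> datetime:
        return datetime.strptime(s.replace("Z", "+0000"), "%Y-%m-%dT%H:%M:%S%z")
    ordered = sorted(transitions, key=lambda t: ts(t["phaseTransitionTimestamp"]))
    out = {}
    for prev, cur in zip(ordered, ordered[1:]):
        delta = ts(cur["phaseTransitionTimestamp"]) - ts(prev["phaseTransitionTimestamp"])
        out[f'{prev["phase"]}->{cur["phase"]}'] = delta.total_seconds()
    return out

# Fabricated fragment in the shape of a VMI's phase transition list.
status = [
    {"phase": "Scheduling", "phaseTransitionTimestamp": "2022-07-28T10:00:00Z"},
    {"phase": "Scheduled", "phaseTransitionTimestamp": "2022-07-28T10:00:05Z"},
    {"phase": "Running", "phaseTransitionTimestamp": "2022-07-28T10:00:09Z"},
]
durations = phase_durations(status)
```

As A notes, the chain stops at Running; a boot-time metric would need an extra "guest booted" event appended to this sequence.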
A: We would need something to report that. I think it's possible; we would just need something in the code to capture it, maybe in how we communicate between the launcher and the handler, or the guest agent probe, or something. There might be a way to do it, but we don't have it today. Sure, yeah, I was just wondering.
B: So I had a question regarding that: does virt-launcher report any kind of metrics back to Prometheus or something like that?
A: I don't know. I think there are some, but don't quote me; I haven't looked in a little while, so I don't remember. I know there's a bunch on the handler and the controller, so it might be something to take a look at, Elaine. I don't know.
B: Oh, okay, yeah. If it were to report Prometheus metrics, I was thinking that virt-launcher could report the actual boot time, right, regardless of when the API reports it as Running.
A: We'd have to figure out whether there currently is a way for the launcher to report "the guest has booted", or maybe libvirt does, or something. Or, if it doesn't, maybe there's a way we can find out and then report it, and then we just expose it to Prometheus. I think it's possible; we just don't have it.
A: Okay, so for now I think we'll stick with these three, since they're achievable, the data already exists, and I think there's some value in them. And we've got these others as ones we could investigate while v1 is unfolding in the community. So along those lines: is there anything else? We have three here that we can report on, and two that we have some ideas for and could also add. Are there any other ones we think are missing?
A: All right, I'll take that as a no. If you think of something, we can bring this topic up again in another meeting; like I said, we have some time to think about it, but keep it in the back of your mind. If we find some time to implement these things, we should invest some resources in it.

A: Okay, that was all I have for v1, then. Let me see. I see Federico here. Federico, did you want to go over this issue again? I don't know if you have any updates on it.
D: Okay, no, I have no updates, except for the part that... I don't know if we can say this yet, but it seems like there is a memory leak, because we saw that the RSS is always increasing.

D: So together with Antonio and Lugo we are trying to investigate the cgo part, with tools that are not quite easy to install, but after a while we succeeded. So we are running our memory test against the cgo part, and I hope that next week we will have some updates about it.

D: Because, if you remember correctly, we increased the virt-launcher overhead from 75 to 100 megabytes, but the tests that we were running exceeded the 100 megabytes as well.
D: So yeah, when this happened, we said: okay, let's try to investigate again and go deeper, because the Go profile doesn't show anything, so it should probably be in the cgo part. But, as I said, the tools to investigate the cgo part are not so easy to install inside virt-launcher. I'm working on it right now.
A: What tools are you using?
D: I tried Valgrind, and it was quite impossible to install in virt-launcher, because it basically explodes. It also seems that there was a problem with permissions and SELinux, and running it as root doesn't work either. Then I tried heaptrack, which is a binary, but it has some problems, so right now I'm trying memleak.

D: I will send you the link. It is now working in my local environment; I had to do some tricky stuff, like adding kernel modules manually, but now...
A: ...it works, okay. I think what would be really interesting: take notes on what you're doing, because we might run across this again in the future, and if there's a flag or something, or a way we could bake what you created into an image to do this, yeah, we should do that.

A: Okay, cool; no, that's good to hear. Okay, good, all right, so...
D: Oh, sorry, I was forgetting: Antonio was also trying jemalloc, which is another tool, with Go. But I don't remember; I don't know if I have the link. If I find it, I will post it.
A: Okay, yeah, whatever you come up with that gives us the data will work. Whatever you come up with is fine; we'll use it. We should just make it available as a tool or something. I don't have a preference; this seems okay. Whatever gives us information, yeah.
D: Basically, we were trying all the tools. Lugo and I are very excited about the eBPF tool, so I think we are happy that that one was the one that worked.
B: Yeah, honestly, because this is eBPF: if we can get one of the eBPF tools to work, then there are a bunch of other networking and CPU tools for eBPF that I know of that can also be used for other things. That's why I was very excited.
D: Yeah, I didn't dig into it, but I saw that there were a lot of tools, so yeah, absolutely. Nice.
A: Yeah, cool, all right. That's really promising, that's good. Let's mark this one "in progress". Okay, let's see if there are any other topics. We could do a quick review here; I think we'll just take a quick look. I didn't have a chance to merge yours, Elaine.

A: Just the periodic for performance; I don't know about this one.

A: Okay, good. All right, are there any other topics people have before we close out?