From YouTube: 2023-05-11 Kubernetes SIG Scalability Meeting
Description
Agenda and meeting notes - https://docs.google.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?usp=sharing
A
So this is the SIG Scalability meeting, 11th of May, 2023, and I can see that we have topics today. So maybe let's start with the KubeVirt one.
B
Yeah, I'm Ryan Hallisey. I'll talk here. There are a few people here today who also joined the call from this group. So, in case you don't know the KubeVirt project: we actually have a SIG-scale group in KubeVirt. We do some work testing scale and performance, and so we had a few people join today.
B
We wanted to — I met with Wojtek at KubeCon, and I'm going to share a few things about what we've been doing. So, if you don't mind, I'm going to share my screen. Yeah.
B
All right, so I put together just a little bit of content, so we can show you a few things of what we've been doing and have a discussion around a few ideas. Okay, so, KubeVirt SIG-scale. Let me move my panel here. Okay, there we go, all right. KubeVirt SIG-scale — our mission at KubeVirt SIG-scale, what we've been doing: KubeVirt maintains its own API server, it's got its own components, it's got a scheduler, we've got a bunch of things.
B
We've
got
our
own
workload
that
runs
as
a
pod,
but
we,
you
know,
we
have
our
own
apis,
like
VMS,
vmis
and
so
forth,
and,
and
so
our
perspective
was
that
we
should
have
our
own
scale
and
performance
standards,
tools
and
best
practices
and
so
on,
and
so
that's
really
what
we've
been
focused
on
in
six
skills.
We
we're
focused
on
on
these
things,
building
out
scale,
performance
standards
and
many
of
the
tools
around
it.
B
So I'm going to talk a little bit about how we do the measurement. You're going to see a common theme throughout this: we want to use the common tools for how we measure, and that's Prometheus, or, you know, having something on a local laptop. So what we do is we leverage —
B
We
have
two
metrics
that
we've
created
Prometheus
that
we
use
heavily
to
do
our
our
measurements
for
performance
at
scale.
First,
one
is
it's
called
phase
transition
times
and
I'll
I'll
go
through
on
more
detail.
In
that
another
slide.
B
The
second
one
is
the
client
go
HTTP
calls
to
the
kubernetes
API
server,
so
the
phase
transition
times
we
primarily
use
to
measure
performance
like
P95,
we'll
look
at
like
how
long
it
takes
for
for
us
to
go
through
a
few
different
phases,
our
virtual
machine
to
go
through
efficient,
different
phases
to
reach
a
running
State,
and
then
we
have
like.
So.
The
client
goes
think
of
like
how
many
hcp
calls
are
we
making
to
the
kubernetes
API
server.
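[A minimal sketch of what a phase-transition histogram like this could look like with the Prometheus Go client. The metric name, labels, and buckets here are illustrative assumptions, not KubeVirt's actual definitions:]

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// phaseTransitionSeconds records how long a VMI spent in one phase before
// moving to the next. Name, labels, and buckets are illustrative.
var phaseTransitionSeconds = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "vmi_phase_transition_time_seconds",
		Help:    "Seconds between two phase transitions of a VirtualMachineInstance.",
		Buckets: prometheus.ExponentialBuckets(0.1, 2, 12), // ~0.1s up to ~200s
	},
	[]string{"phase", "last_phase"},
)

func init() {
	prometheus.MustRegister(phaseTransitionSeconds)
}

// ObservePhaseTransition is called when a VMI moves from lastPhase (entered
// at 'from') to phase (entered at 'to'), using the timestamps recorded on
// the object's status.
func ObservePhaseTransition(phase, lastPhase string, from, to time.Time) {
	phaseTransitionSeconds.WithLabelValues(phase, lastPhase).Observe(to.Sub(from).Seconds())
}
```

[A P95 like the one mentioned here would then come from a histogram_quantile over the resulting _bucket series.]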
B
This is, like, how many patches — how many times are we patching our virtual machine during its lifecycle — and this is obviously important for scale, right? We want to be a good neighbor in the cluster. KubeVirt is just a guest; we don't want to be sending thousands of patch requests, or unnecessary patch requests. So we monitor these things to make sure we're being a good citizen in the cluster. Okay, so those are the two metrics we use heavily. So, a closer look at what these look like.
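[For the second metric, one plausible way to count calls like this — a sketch, assuming the controller owns the rest.Config it builds its clients from — is to wrap the client-go transport and count requests per HTTP verb. Note that client-go also ships a built-in rest_client_requests_total metric with similar information:]

```go
package clientmetrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/client-go/rest"
)

// requestsTotal counts outgoing API-server calls by HTTP verb; the metric
// name is illustrative.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "controller_apiserver_requests_total",
		Help: "HTTP requests sent to the Kubernetes API server, by verb.",
	},
	[]string{"verb"},
)

type countingTransport struct{ next http.RoundTripper }

func (t countingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	requestsTotal.WithLabelValues(req.Method).Inc() // PATCH, GET, PUT, ...
	return t.next.RoundTrip(req)
}

// Instrument wraps a rest.Config so that every request made through clients
// built from it is counted.
func Instrument(cfg *rest.Config) *rest.Config {
	prometheus.MustRegister(requestsTotal)
	cfg.Wrap(func(rt http.RoundTripper) http.RoundTripper {
		return countingTransport{next: rt}
	})
	return cfg
}
```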
B
So, the phase transition times. The inspiration for this is that we took the idea of the creation timestamp that you see on pods or other objects — and the deletion timestamp, things like that — and we extended it to also include other things that were important to us. So this is a picture of a virtual machine instance, and what this —
B
What
this
has
is
the
specific
phases
that
the
virtual
machine
will
go
through
before
we're
actually
in
a
place
that
a
person
can
use
the
guest,
and
so
these
phases,
like
pending
scheduling,
scheduled
running
or
all
like,
have
different
steps
like
pending,
will
be
like.
Okay,
we've
we're
look,
we're
waiting
for
the
scheduler.
The
coupon
is
scheduled
to
to
find
a
place
to
for
to
land
this.
To
win
this
workload,
scheduling
like
we're
in
we're
in
the
process
of
being
scheduled
somewhere
and
then
scheduled
means
like
we've.
B
We've
found
a
node
where
this
process,
where
this
guest
is
going
to
land
and
then
eventually
running
running,
is,
is
basically
just
means
that
the
domain
has
been
defined
like
we've
like
we've
actually
created
it
went
first,
doesn't
necessarily
mean
the
guest
is
running
yet,
but
it
just
means
that
we've
actually
defined
the
domain,
and
so
basically
the
the
idea
is
we
have
these
this
information?
We
have
this
granular
look
at
what's
happening,
and
so
now
we're
exposing
it.
We've
basically
taken
this,
and
we
have
we've
sort
of
done
two
things
with
it.
B
We
have
we
post
the
the
the
timestamps
in
the
status
on
the
object,
and
we
do
this
when
we
actually
go
through
the
phase
transition,
so
we
don't
like,
for
example,
we
don't
like
to
two
update
calls
to
do
this.
We
we
take
in
the
same
uptake
or
the
patch
call.
We
will
to
change
the
phase.
B
We
actually
will
post
this
face
or
in
just
some
timestamp
at
the
same
time,
and
the
second
thing
is
we
also
get
in
Prometheus,
and
so
what
this
does
for
us
is
we
end
up
with
stuff
like
this,
like
we
build
dashboards
around
this,
where
we,
we
can
clearly
see
like
okay,
here's
like
what
we
go
through
for
this
BMI,
like
here's
with
those
granular
phases.
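[A simplified sketch of a status shaped this way — modeled loosely on KubeVirt's VMI status, with field names and types trimmed down for illustration — where the phase and its timestamp are set together so a single status patch carries both:]

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Phase is the lifecycle phase of a virtual machine instance.
type Phase string

const (
	Pending    Phase = "Pending"
	Scheduling Phase = "Scheduling"
	Scheduled  Phase = "Scheduled"
	Running    Phase = "Running"
)

// PhaseTransitionTimestamp records when the object entered a phase.
type PhaseTransitionTimestamp struct {
	Phase                    Phase       `json:"phase"`
	PhaseTransitionTimestamp metav1.Time `json:"phaseTransitionTimestamp"`
}

// Status carries the current phase plus one timestamp per transition. The
// list is bounded by the fixed set of phases, so the object does not grow
// without limit.
type Status struct {
	Phase                     Phase                      `json:"phase"`
	PhaseTransitionTimestamps []PhaseTransitionTimestamp `json:"phaseTransitionTimestamps,omitempty"`
}

// SetPhase mutates the status in memory; the caller then issues a single
// patch that updates the phase and its timestamp together.
func (s *Status) SetPhase(p Phase, now metav1.Time) {
	s.Phase = p
	s.PhaseTransitionTimestamps = append(s.PhaseTransitionTimestamps,
		PhaseTransitionTimestamp{Phase: p, PhaseTransitionTimestamp: now})
}
```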
B
We can see all the different data about the phases and roughly how long they take, and this gives us a really good picture of how we're performing in our clusters. We see lots of patterns — we see that sometimes different phases take longer — and it's really easy for us to tell exactly where we're getting stuck. And also, when running this in production, or if we wanted to do measurements, we can pinpoint where our bottleneck is.
B
The other thing is that we created a bunch of CI that will constantly measure this against different PRs, and we catch when these numbers change. So, for example — just in the last five months — we caught an increase in patch requests, and these were the two pull requests where we got them. And it makes sense: these have to do with cleaning up finalizers and dealing with controller revisions, so they're important for actually doing management of the virtual machines.
B
But
but
the
point,
the
important
point
is
that
we
can
see
this.
We
can
see
this,
as
you
know,
as
the
code
base
evolves,
we
can
see
how
this
changes,
and
so
obviously,
when
we
can
see
this,
it
means
that
that
hopefully,
we
can
eventually
apply
a
number
to
this
and
get
us
a
sense
of
like
okay.
You
know
this
is
this
is
how
we
could
affect
scale
like
we're
doing
a
lot
more
patches.
B
Now
with
this,
when
we
pull
on
this
pull
request,
this
could
affect
scale
in
some
way,
so
I
I
wanted
to
open
up
for
questions
or
discussion,
and
this
is
just
these
are
just
some
ideas
of
like
things
that
we've
looked,
that
we
kind
of
look
to
to
want
to
develop
when
you
know
with
with
you
guys
and
for
kind
of
the
question,
I
guess
that
I
get
all
the
time.
It's
like
it's
like.
B
How
long
it
takes
for
like
a
PVC
to
attach
like
I,
don't
I,
don't
know
what
it
is
and,
and
so
that's
kind
of
one
of
the
things
that
we
at
least
initially
was
looking
to
collaborate
with
you,
guys
and
and
I'm
sure.
There's
other
way,
other
ways
as
well,
but
like
find
ways
like
that,
we
could
look
to
measure
some
more
and
look
for
ways
that
we
can
try.
B
And
you
know,
zoom
in
on
some
of
the
different
characteristics
that
Argus
the
different
phases
that
different
workloads
go
through
with
pods
and
or
even
things
in
the
scheduler
of
PVCs.
B
So
these
are
just
some
examples
like
PVC
attachment,
maybe
Network
attachments
or
any
other
pod
conditions
that
we
can
think
of
Beyond,
just
a
creation
time
step
or
some
things
that
we
we
had
in
mind
the
things
that
could
provide
a
a
more
granular
picture
of
like
what
an
end-to-end
flow
would
look
like,
and
so
anyway,
I
wanted
to
open
up
for
questions
or
thoughts
on
the
site.
I'm,
definitely
open
hearing
what
you
guys
have
to
say
and
that's
my
last
slide.
C
Yeah, thanks — that was great, and that was super useful. I think that, in general, defining — or figuring out — what takes a significant part of the time is not a problem that we have really solved well anywhere in Kubernetes. I think the idea, or the path, that we are trying to go down in the project is integrating better with tracing, but we still haven't figured out how to do it.
C
Well,
so
we
are
focusing
on,
like
smaller
parts
of
the
system,
now
like
I
paste
it
to
the
chat
like
one
of
the
two
caps
that
we
are
trying
to
pursue,
which
is
like
integrating
cubelet
with
tracing
I
think
this
did
like
there's
also
another
one
that
like
that
I
can
find,
which
is
like
integrating
API
server
with
tracing.
C
This
doesn't
like
address
the
neither
of
those
addresses
like
the
the
like
they
really
cross,
cutting
operations
that
take
number
of
controllers,
and
so
on
and
so
on,
but
hopefully
based
on
the
learnings
that
we
have
from
from
from
those,
we
will
be
able
to
somehow
better
instrument.
The
system
well
I,
guess
like
the
the
the
main
point
that
I
wanted
to
to
say
is
that
we
didn't
really
figure
it
out
well
in
the
project
anywhere.
Yet.
B
Yeah
I
I
from
like
from
my
perspective,
I,
don't
know
the
system.
Maybe
you
can
tell
me
if
it
sounds
crazy,
but
my
understanding
is
that-
and
this
is
just
a
like
I,
don't
know
I'm
just
guessing
here
like
we
have.
We
have
PVC
is
a
bunch
of
phases.
Right
like
this.
Like
I,
don't
know
you
can
I
think
there's
an
attached
phase,
there's
a
pending
phase
or
something
like
that.
B
Well, I was wondering: let's say we were to pursue something like this, kind of similar to what I was talking about with what we do now with virtual machines. Would it make sense that, the moment a PVC changes its phase to something else — attached or whatever — we emit a metric?

Do you think that would get us closer to what our goal is — answering that question of how we can get to the more granular phases of what goes into getting a pod up and running?
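[A minimal sketch of that idea, assuming a watcher built on client-go informers that observes the time a PVC spent in its previous status.phase. Note that core PVC phases are Pending, Bound, and Lost — attach/detach is tracked on VolumeAttachment objects — and the timing is approximate, since PVC status carries no per-phase timestamps:]

```go
package pvcwatch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

var pvcPhaseSeconds = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "pvc_phase_transition_time_seconds", // illustrative name
		Help: "Observed seconds a PVC spent in its previous phase.",
	},
	[]string{"phase", "last_phase"},
)

// Watch emits an observation whenever a PVC's status.phase changes. Timing
// reflects when this process saw the updates, not when they happened.
// Requires client-go >= v0.26, where AddEventHandler returns a handle.
func Watch(cs kubernetes.Interface, stop <-chan struct{}) error {
	prometheus.MustRegister(pvcPhaseSeconds)
	factory := informers.NewSharedInformerFactory(cs, 0)
	seen := map[string]time.Time{} // "ns/name" -> when current phase was first seen

	// Handlers on a single informer run serially, so 'seen' needs no lock.
	_, err := factory.Core().V1().PersistentVolumeClaims().Informer().
		AddEventHandler(cache.ResourceEventHandlerFuncs{
			UpdateFunc: func(oldObj, newObj interface{}) {
				oldPVC := oldObj.(*corev1.PersistentVolumeClaim)
				newPVC := newObj.(*corev1.PersistentVolumeClaim)
				if oldPVC.Status.Phase == newPVC.Status.Phase {
					return
				}
				key, now := newPVC.Namespace+"/"+newPVC.Name, time.Now()
				if since, ok := seen[key]; ok {
					pvcPhaseSeconds.
						WithLabelValues(string(newPVC.Status.Phase), string(oldPVC.Status.Phase)).
						Observe(now.Sub(since).Seconds())
				}
				seen[key] = now
			},
		})
	if err != nil {
		return err
	}
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	return nil
}
```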
D
So, the metrics aspect that you're talking about — I believe we have it in a few places already today. For example, the API server has metrics for different parts of the API path, and the kubelet also has some metrics around volume mount latencies and stuff like that. The thing with them is that they aggregate, so you kind of know more about the trends and how things are doing in general. It depends on what you really want.
D
It's
felt
like
when
you,
when
you
showed
the
the
crd
for
the
virtual
machine.
You
want
to
have
the
record
per
object,
so
I!
Guess,
if
you
need
that
level
of
granularity
you,
you
will
need
to
persist
this
kind
of
data
somewhere
now,
whether
you
do
it
on
the
object
itself
or
I
was
actually
just
looking
at
the
the
cap
that
ytek
shared
and
there
it
seems
like
if
I'm
reading
correctly,
they
are
trying
to
use
some
sort
of
a
different
structure
for
actually
holding
the
traces
that
are
disjoint
from
the
object.
C
The names of that were changing over time; this is the current one here. Okay, yeah. But in general, I agree with what Shyam just said: it depends on the goal that you want to achieve. If you want this for debugging — why my single instance took longer, or whatever was happening — then metrics aren't super useful for that.
C
If
you
want
to
use
it
as
a
general
information
for
what
are
the
slowest
part
and
what
we
potentially
should
focus
on
to
optimize,
then
metrics
sounds
like
a
reasonable
option.
B
Yeah, it's the latter. It's about answering that question: if you asked me how long it takes for this virtual machine to go from the moment I created it to the moment I can hand it off to someone to use — it's getting the granularity down enough that I could say, okay, we clearly have a bottleneck with whatever we're doing in storage.
E
One more thing I wanted to add: the metrics or things you are looking for are mainly coming from the kubelet side, not from the API server side — like network attachment and volume attachment. So one possibility could be something like what the API server does with audit logs: starting in recent Kubernetes versions, we have started adding annotations in audit logs for each step, so you can see in those annotations where the latency is coming from.
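[For reference, a minimal sketch of that mechanism, using the helper the API server code base provides for attaching annotations to the audit event of the current request. The annotation key and the step being timed are made up for illustration:]

```go
package handler

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/audit"
)

// timeStep runs one step of request handling and records its latency as an
// annotation on the request's audit event (if auditing is enabled for the
// request). The annotation key here is illustrative.
func timeStep(ctx context.Context, name string, step func() error) error {
	start := time.Now()
	err := step()
	audit.AddAuditAnnotation(ctx,
		"latency.example.io/"+name,
		fmt.Sprintf("%dms", time.Since(start).Milliseconds()))
	return err
}
```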
E
We
could
do
something
similar
in
cubelet.
Maybe
it's
audit
log
or
something
else
with
the
annotations
where,
when
a
pod
is
getting
created
in
cubelet
a
network
ads
Mount
sorry
volume
gets
Mount,
it's
an
annotation
and
network
gets
created,
that's
an
annotation,
so
when
you
are
checking
for
a
particular
pod
in
cubelet
like
why
why
it
was
delayed,
you
could
just
see
those
logs
in
cubelet
and
tell
that
could
be
another
way.
B
The
tricky
part
with
going
with
logs
is
so
now
we
yeah
that
could
work.
No,
but
the
problem
is
that,
like
how
do
we
view
it?
How
do
we
take
this
and
analyze
computer
and
then
and
then
we
would
need
to
have
and
we'll
probably
be
like
Cabana
or
something
that's
which
is
fine
like
we
could
you
could
do
it
that
way?
B
So
what
else
yeah?
What
I'm
saying
is
like,
so
you
would
need
something
to
so
like
like
it's,
it's
aggregating
the
data.
How
do
I
take
the
data
and
analyze
it
I,
guess
that
would
be
the
problem
by
the
way
like
I'm,
not
just
going
through
the
logs
and
picking
out
individual
pieces
and
looking
at
the
latency
item,
I'd
want
to
have
a
tool
to
to
deal
with
how
I'd
need
to
analyze
it
so,
like
you
know,
maybe
Cabana
or
something
that's.
What
I
was
what
I
was
saying.
B
E
All right, that makes sense. No, this is more for getting granular data on a particular pod, for example. But if, as Shyam was mentioning, you're looking for trends, then the metrics would support that.
D
Also, the other thing is: if you're capturing every single phase transition on the object, does it mean that over time your object is going to grow, or —
B
It's fixed, it's bounded — there's a fixed number of them. We're only going to post these four, and then what we do is use the creation timestamp, and we also have the deletion timestamp. So our starting point is the creation timestamp, the completion is the deletion timestamp, and then these four phases in the middle.
D
All
right,
okay,
yeah
so
today,
in
our
scale
tests
this.
This
is
a
bit
interesting
in
the
sense
that
in
our
scale
test
today,
we
also
have
to
measure
this
the
one
that
we
run
using
cluster
loader.
We,
we
are
kind
of
getting
these
bits
of
informations
from
things
like
events
and
like
modifications
to
the
Pod
itself
like
create
timestamp,
shity
events
and
stuff,
like
that,
your
yeah
you're.
B
Making
it
so
how?
How
do
you?
How
do
you
get
the
the
scheduling
like
when
you
send
like
scheduling?
What
is
that
that's
like
so
the
when
the
Pod
is,
is
what.
D
It's essentially, I think, one of those two: we either check, I think, for the event that the scheduler emits when it schedules the pod, or something written on the Pod itself — I can't recollect which field.
C
Either way, that's where we are. And for phases within what the kubelet is doing, I think we are relying on fields from the Pod, but we also extended the metrics in the kubelet, and we would like to — or maybe we already did, I'm not following that closely, to be honest — migrate a bunch of that to rely on the metrics for aggregations, for the aggregated view of how the system is behaving, and use those individual fields in the pods, or events, only for debugging individual things.
F
I had a follow-up question on that. I think the way we collect the metrics — whether it's coming from the phase transition timestamp or coming from the kubelet — really does not matter. All we need is a breakdown: let's say the creation-to-running time went really high after a particular release — was it related to the kubelet, or was it related to something in the KubeVirt stack?
F
So
keeping
that
in
mind
is
there
a
reason
why
these
scheduling
times
are
collected
from
the
events
and
not
from
the
scheduler
emitting
those
latencies
in
in
form
of
metrics.
C
It may be hard to — so, for the scheduler, you can emit the metric. Maybe what I'm saying now doesn't make sense — I wanted to say that you can emit the metric when you are making a scheduling decision, and that's probably what you want from the scheduler, versus where and when the event is happening. I guess I also wanted to say that we would potentially be ignoring the part about sending the requests to the API server and so on, but that's probably about the same for both anyway, and it's probably not what we are interested in anyway. So —
F
That's what we have been following. And regarding granularity, I think we are also trending the data: the implementation is such that it comes in on a per-object basis, but at the high level we are looking at a P95 of creation-to-running, or an average, right? So I think moving to that scheduler metric will then help us align: okay, KubeVirt started running on Kubernetes 1.27 and is seeing more time being spent in the scheduling phase — let's go look at the Kubernetes 1.27 P95 for scheduling and see.
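[To illustrate the kind of aggregate comparison being described, a sketch that pulls a P95 out of Prometheus with the Go client. The endpoint and the histogram name — reused from the illustrative metric earlier — are assumptions; a real query would target whichever histogram the scheduler or KubeVirt actually exposes:]

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://prometheus:9090"})
	if err != nil {
		panic(err)
	}
	// P95 of time spent reaching Scheduled over the last day, against the
	// illustrative histogram from earlier in the discussion.
	query := `histogram_quantile(0.95, sum by (le) (
	    rate(vmi_phase_transition_time_seconds_bucket{phase="Scheduled"}[1d])))`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	result, warnings, err := promv1.NewAPI(client).Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result) // a vector carrying the P95 in seconds
}
```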
A
Also, you mentioned that you are wondering whether a change comes from KubeVirt or from Kubernetes itself. So I was wondering: maybe it would make sense in your case to actually run KubeVirt with a fixed version of Kubernetes, for example, and then —
A
You
are
sure
that
all
the
changes
come
from
Cube
verd,
but
also
there's
also
something
that
we
use
for
testing
various
things
where,
for
example,
with
the
aircon
compiler
and
where
we
have
like
fixed
kubernetes
version,
and
we
just
test
column
compiler,
and
this
can
also
like
just
reduce
the
noise
that
comes
from
both
projects
at
the
same
time
and
help
you
debug
further.
B
Yeah
yeah
we
do,
we
do
do
that.
We
we
test
across
I,
think
three
or
I
think
it's
three
releases,
so
yeah
I
mean
that
is.
That
is
a
point
yeah.
We
would
want
to
see
that
stuff,
but
I
mean
it
goes
further.
Even
like
you
know
like
Elena
is
saying
like
we,
we
want
to
see
deeper
into
you
know
exactly
like
you
know.
We
can
see
that.
Okay,
maybe
it's
slower,
you
know.
Maybe
it's
maybe
it's
because
of
the.
B
If
you
ever
change
and
it's
Cuba
has
changed,
but
nowhere
we're
like
we're
curious
to
see
like
even
deeper
like
what
is
it
that
is
going
wrong
in
the
scheduling
like
you
know,
maybe
we'll
file
a
bug
like
I,
like
that's
kind
of
where
we
want
to
go
with
it.
It's
like
you
know.
We
can
see
that,
like
you
know,
we
we've
got
a
lot,
a
lot
of
large
clusters.
We
run
the
stuff
we
want
to
see
exactly
where
it
is
we
want
to.
B
Yeah,
so
we
I
mean
we
I,
guess
like
what
this
has
been
a
really
good
discussion
by
the
way
and
I
guess
like
the
what
I,
as
a
goal
like
for
for
us
I
mean
this
is
something
that
we've
been
doing
and
driving
within
Hubert
and,
and
it
makes
sense
to
us-
and
it's
worked
well
for
us
and
I
mean
we'd-
be
interested
in
helping
to
contribute
to
this.
This
has
been
something
that
we
care
a
lot
about,
because
it's
it's
like
keyword
is
like
we're.
B
I
guess
the
point
we're
making
here
pretty
over
and
over
again
is
like
thank
you.
It's
just
a
guest
and
and
the
cluster
and
right
like
we
use
pods,
you
know
like
we
use
PVCs
and
stuff,
and
it's
got
It's
got
all
these
components,
but
you
know
we
want
to.
We
also
want
to
make
kubernetes
and
that
we
want
to
make
it.
You
know
improve
its
scalability
and
performance
because
we
rely
on
heavily
so
it's
it's.
It's
definitely
a
shared
goal
on
this,
and
so
I
I
we'd
love
to
help.
B
D
So one thing where I feel maybe you can help, given your familiarity with the space, is the things that are happening on the node side. Today, in the tests, for us — let's say when we are creating pods — the granularity at which we operate is: after the pod is scheduled, the kubelet starts running it.
D
We,
for
example,
don't
know
how
much
of
that
time
is
going
into
Parts
like,
for
example,
mounting
a
secret
or
mounting
a
config
map
into
the
into
the
Pod
or,
like
things
like
fetching,
an
IP
for
the
Pod
IP
assignment.
D
So
if,
if
there
is
a
way,
you
can
actually
take
your
knowledge
here
and
kind
of
measure
that
maybe
via
metrics
or
maybe
something
else
it,
it
would
be
interesting
to
see
that
added
as
a
measurement,
which
is
a
concept
in
our
scale
in
our
load
testing,
tooling,
to
kind
of
measure
some
aspect
of
how
the
system
is
behaving,
so
you
could
add
a
measurement
for
that.
You
can
play
around
with
it.
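[Roughly, a measurement in that tooling is a named plugin the load test drives through actions such as start and gather. The sketch below only paraphrases the concept with hypothetical types — it is not ClusterLoader2's real interface, which lives in kubernetes/perf-tests:]

```go
package measurement

import "fmt"

// Measurement paraphrases the ClusterLoader2 concept: a named plugin that
// the load test drives through actions. Hypothetical interface, not the
// real clusterloader2 API.
type Measurement interface {
	Execute(action string) error // e.g. "start", "gather"
	String() string
}

// podStartupBreakdown sketches a measurement that would time node-side
// steps (volume mounts, IP assignment) between "start" and "gather".
type podStartupBreakdown struct{}

func (m *podStartupBreakdown) Execute(action string) error {
	switch action {
	case "start":
		// Begin watching pods and node-side metrics here.
	case "gather":
		// Summarize the observed per-step latencies here.
	default:
		return fmt.Errorf("unknown action %q", action)
	}
	return nil
}

func (m *podStartupBreakdown) String() string { return "PodStartupBreakdown" }
```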
B
Sure, yeah, we can do that. What would you suggest? What we could do is write some things up on what we think could be an approach — what would you suggest is the right way to approach it? Is this something where we should come back to this meeting and share our thoughts, if you guys want, or write an issue, or something — what would you suggest?
D
Yeah, I think anything that's convenient for you — an issue, or you can come back to this meeting. But also, I'd say, start by playing around with that tool and see if there are ways in which we can enrich some of that information with these tests. And, of course, you can also post any questions you have on the SIG Scalability Slack channel and maybe tag one of us.
A
I'm wondering about one more thing, because we mentioned that there are multiple phases — for example, post-scheduling — and I'm wondering: maybe we should even have some brief summary of the metrics that we already have and can use for debugging, because I'm not sure I'm personally aware of all of them — with PVCs, for example, or other stuff. And maybe we could even just start with metrics, to understand what we have covered and what we don't, already.
B
Yeah
we
can
take
that
as
part
of
our
approach.
We
need,
we
need
I,
don't
know
the
answer
to
that
either.
So
yeah
and
I
think
we
want
to
know
that
I
think
it's
important,
so
we
can
do
that
as
well.
We
can
come
up
with
some
of
what's
the
current
set
of
things
that
are
available
and
we
can
talk
about
them.
D
Yep
cool,
thank
you
and
everyone
else
we
joined
I
guess
with
over
time
should
wrap
it
up.