From YouTube: SIG - Performance and scale 2023-04-06
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A
This is April 6, 2023; please add yourself as an attendee. Okay, for today's meeting, what I really want to do is see if we can push this PR along, and if we've got any more diagrams to view, that would be nice. My purpose here, for a second, is that I'd like to get this job in so that we can start viewing this stuff based on the graphs, so we don't have to look at the job itself. So I'd prefer we hold off on that until we have this.
A
This is ready, so I'd like to just look at this for a minute and see if we can find any ways to push this along. Lee, do you want to talk about this at all, like where you are and what you need next to get this in?
B
So I have posted an update to the code that worked for me to generate those graphs.
B
One piece of feedback that we talked about last time was that we want to be selective about what metrics we are putting on disk, so that space is not a concern. So the way I imagine this tool working is in three phases.
B
Phase one will just, you know, do a regex grep of all the build-log.txt files and put the results into a temporary folder, which can be specified from the command line. This folder need not be exported, because it contains all the data, so it can live temporarily in the CI job where this is running. The second phase generates a per-week view: a weekly aggregation of whatever metrics you specify. There is a command-line flag which takes a comma-separated list of metrics.
B
That way, because there is a segregation of phases, we can get away from hard-coding this selection and, you know, do it through command-line flags. And then phase three will just go over phase two's result and generate a graph. So phase three is a plotting tool, but the prerequisite for phase three is phase two's result, and similarly the prerequisite for phase two is phase one.
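The phase structure described above could be sketched roughly like this; the function names, the sample shape, and the averaging choice are my assumptions for illustration, not the actual PR's code.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical sketch of phase two: aggregate per-build metric samples
# into weekly buckets. Phase one is assumed to have grepped tuples of
# (build date, metric name, value) out of each build-log.txt into the
# temporary working folder.

def week_start(d: date) -> date:
    """Return the Monday that starts the ISO week containing d."""
    return d - timedelta(days=d.weekday())

def aggregate_weekly(samples, selected_metrics):
    """Average each selected metric per ISO week.

    samples: iterable of (date, metric_name, float value)
    selected_metrics: set of metric names (the comma-separated flag)
    """
    buckets = defaultdict(list)
    for d, metric, value in samples:
        # Only the metrics named on the command line are kept, so
        # disk space for the exported result stays small.
        if metric in selected_metrics:
            buckets[(week_start(d), metric)].append(value)
    return {key: sum(vs) / len(vs) for key, vs in buckets.items()}
```

Phase three would then plot these (week, metric) → value points; because the phases only share files, a monthly or bi-weekly aggregator could later replace this one without touching phases one or three.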
A
Okay, so what you have here is phase one?
B
Yeah, all three have a separate subcommand. I mean, each phase has a separate subcommand, and I have noted the subcommand usage in the PR description.
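Since each phase is a separate subcommand, the CLI shape might look something like the following; the tool name, subcommand names, and flags here are illustrative guesses (the real usage is in the PR description), and argparse stands in for whatever flag library the tool actually uses.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical three-subcommand layout mirroring the three phases.
    parser = argparse.ArgumentParser(prog="perf-analyzer")
    sub = parser.add_subparsers(dest="phase", required=True)

    extract = sub.add_parser("extract", help="phase 1: grep build logs")
    extract.add_argument("--work-dir", required=True,
                         help="temporary folder for raw matches")

    aggregate = sub.add_parser("aggregate", help="phase 2: weekly rollup")
    aggregate.add_argument("--work-dir", required=True)
    aggregate.add_argument("--metrics", required=True,
                           help="comma-separated list of metric names")

    plot = sub.add_parser("plot", help="phase 3: graph phase-2 results")
    plot.add_argument("--work-dir", required=True)
    return parser
```

Keeping the metric list as a flag on the aggregate subcommand is what lets the selection avoid being hard-coded, as discussed above.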
B
Okay, so you remember we had this conversation that there are many other use cases for this tool. I assume that you can use phase one's result to aggregate this monthly, you know, bi-weekly, whatever pace you want to analyze this at. So this decoupling allows us to be agile in the future; it leaves doors open as to what we want to analyze.
B
Yeah, okay, there are a couple of open items that I would like to take as follow-ups.
B
One thing that was mentioned in the earlier review is that we don't have optimization on what jobs we look at, so that optimization I can add as a follow-up. And the other thing that was mentioned is that this only looks at the periodic jobs right now; we'd like to get to a place where we can also add pull requests. But the code seems to be getting huge, and I want to make a little progress by getting this merged.
C
I don't have approver rights on the repo, so I will ask somebody else to have a look at it. You can probably create an optional job for this tool right now, and then we can move to the periodics when we have all the optimizations, like look only at this week and create the plot only for this week, and such. I just wanted to ask: where do you intend to store the data and the plots?
B
Yeah, that is one of the conversations; I think that's an open question. It was suggested that we have a separate repo, like CI Health, that stores the flakes data and the PR merges data.
A
So how do you wire up a job based on what you have here? How does this get automatically triggered? What is our process for getting it to the point where this stuff gets auto-populated? I believe that's what I've got. So it has to happen on some sort of weekly something, I don't know, some weekly scrape. What would we require to do that?
B
So the first thing is: we need to merge this open PR. My understanding is that once this PR is merged, it will create a CI image with this code. Then the open items are: once we have that new repository, we need to add a GitHub workflow or action, basically to use this image and call these subcommands.
B
The first item in that workflow would be to generate results, and the subsequent item would be to push those same results to the GitHub repo. So this PR lays the groundwork; then we need to add it to the CI workflow to make it automated on a weekly basis.
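The weekly automation being described would, in effect, just chain the three subcommands and then commit the output to the results repo. A sketch of that command sequence follows; the tool name, subcommand names, and commit message are placeholders I've invented, not the actual workflow.

```python
def weekly_commands(work_dir: str, metrics: list[str]) -> list[list[str]]:
    """Build the ordered command list for one hypothetical weekly CI run.

    Each inner list is one command (argv-style), ready to hand to
    whatever CI system ends up running this.
    """
    tool = "perf-analyzer"  # placeholder for the tool in the CI image
    return [
        # Phase 1-3, in dependency order.
        [tool, "extract", "--work-dir", work_dir],
        [tool, "aggregate", "--work-dir", work_dir,
         "--metrics", ",".join(metrics)],
        [tool, "plot", "--work-dir", work_dir],
        # Final step: push the generated results to the results repo.
        ["git", "add", "-A"],
        ["git", "commit", "-m", "weekly perf data"],
        ["git", "push"],
    ]
```

A GitHub Actions workflow, or any other CI, would simply run these steps in order inside the published image on a weekly schedule.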
B
Sure, yeah, I was just using GitHub Actions as a placeholder for some CI, but the idea is to put it in some kind of CI automation. I haven't dug a lot into what features we need from that CI, so I'm not sure right now which one will be easier.
B
Makes sense, makes sense. Yeah, once we get to that stage, I would like to understand a little bit more about how to do that. If you can help out with that, it'll be awesome.
A
Okay, so it sounds like we've got a clear path forward. We're going to spend some time reviewing and merging this, and then I think the rest is straightforward: we need to get the git repo, and then we need to wire it up so that this runs periodically.
B
With the tool that collects LIST and GET data, I've yet to file it, but that's one open item as well for us. The audit tool relies on the client-go Prometheus metrics, and the client-go Prometheus metrics differentiate LIST, WATCH, and GET by the URL. That differentiation is missing the label selector and the name/field selector. So if a LIST call is made using a field selector or a label selector, then it miscategorizes.
A
The calls did not seem right. Is that correct? My interpretation has been that when you use the field selectors, it takes a different code path than LIST, but I don't know how to correctly classify it, so I always considered it to be GET. So what is doing the classification in this case? Because basically LIST is GET, right? It's the same thing; it's just a less specific GET, right?
B
What I expect in a GET is that the URL names the resource, then, whatever the kube semantic is, the name and the namespace of the object. That's the URL I expect. In a LIST call the URL will just be the resource, missing the namespace and name, so that will return everything under that resource. For a label selector and a field selector it gets tricky, because after the resource you mention labelSelector equals something and fieldSelector equals something. I mean, we can have a separate categorization, but my understanding was that kube semantics consider those as LISTs. Okay.
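The URL convention being described can be sketched as a small classifier. This is my reading of the rule under discussion (a URL ending at the collection is a LIST even when labelSelector/fieldSelector query parameters narrow it; only a URL naming one object is a GET), not the actual client-go instrumentation code, and it deliberately ignores subresources like `.../pods/name/status`.

```python
from urllib.parse import urlparse

def classify_read(url: str, resource: str) -> str:
    """Classify a read-only API call as "GET" or "LIST" from its URL.

    resource: the collection segment to look for, e.g. "pods".
    """
    parsed = urlparse(url)  # splits off ?labelSelector=... etc.
    segments = [s for s in parsed.path.split("/") if s]
    if resource not in segments:
        raise ValueError(f"{resource!r} not found in {url!r}")
    trailing = segments[segments.index(resource) + 1:]
    # A trailing object name means a single-object GET...
    if trailing:
        return "GET"
    # ...otherwise it is a collection read: a LIST, with or without
    # label/field selectors in the query string.
    return "LIST"
```

Under this reading, a filtered call like `/api/v1/pods?fieldSelector=spec.nodeName=n1` still classifies as LIST, which matches the "a selector-filtered list is still a list" semantics B describes.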
A
Okay, all right, sounds good. Thanks, Lee. All right, I'm going to go to the last topic here. I have not looked at this, but Brian did. Let's see. So this merged; I need to look at this.
A
Okay, here we go. So we do have it. All right, sweet. So I think we're back online with the dedicated cluster. Let's take a look.
A
It doesn't make any sense. For some reason we exit and do make cluster clean right there. Oh, okay, I don't know what this is; it seems a little strange. This looks to be part of the same log.
B
Oh right, yeah.
A
Same data. You're asking how or why we came about getting these here, like what made us choose these?
A
So it's using the same data. But think about it, right: we're just scraping Prometheus, and then the audit tool just takes the Prometheus data and dumps it to standard out, and then we're scraping it from standard out into building graphs from it.
A
So basically what we're doing here with Prometheus is the same thing, right? We're just going through Prometheus to Grafana.
A
I don't think that'll work for the presubmits.
A
I saw something in here. Okay, there, yeah, there is... I don't know if there's... oh hey, Chase. No, I'd looked at this earlier; this is just for the... this isn't the data. I don't know, maybe there is, I'm not sure. But actually, we still need the tool to do that piece of it. So if we're doing it for the presubmits, maybe we do it for the dedicated cluster as well. I think it's fine.
A
Okay, well, this is cool. I'm glad we got this up, so when we eventually get this periodic back, whatever is going on here with the load generator, we can start looking at that data again. Cool. Okay, wait, I'm going to link this in here, actually.
A
Okay, so we've got KubeCon in two weeks. I think we'll meet next week, and then the week after will be canceled, so we'll probably only have one or two more meetings in April.