From YouTube: SIG - Performance and scale 2023-04-13
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: All right, let's begin. We're not going to go over the performance data; the only thing I wanted to point out is that the dedicated job is still failing.
A: Meanwhile the performance job is still looking good, so we're not going to go through those results; I think you've got them down here anyway, so we'll look at those in a second. All right, Alay, your patch works, right? I think it did, yes, great. So what are the next steps after this? I think we said you've got this issue to store the data, and then we have a place to start publishing, right?
B: Yes. So currently the tool runs on the SIG performance pre-submit job. There are some ideas; I have one: can we create a similar post-submit job? For example, I was digging into Prow, and after each PR merges to master you can run a Prow post-submit job, so we can understand the impact of that PR merging into master, instead of us running this on pre-submits, which can be optional and can vary depending on what goes into master.
B: Either we need to fix the regex to make it work with both, or we just need to do a blanket export of that artifact JSON across all jobs, so we don't have to do regex matches anymore.
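A minimal sketch of what that blanket export could look like, assuming a hypothetical `Results` type and Prow's standard `$ARTIFACTS` directory (the struct fields and file name are illustrative, not the audit tool's actual API):

```go
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// Results is a hypothetical stand-in for the audit tool's result type.
type Results struct {
	Job     string             `json:"job"`
	Metrics map[string]float64 `json:"metrics"`
}

// exportArtifact writes the results as JSON into the Prow artifacts
// directory ($ARTIFACTS), so every job exposes the same file and the
// scraper no longer needs to regex-match build logs.
func exportArtifact(r Results) error {
	dir := os.Getenv("ARTIFACTS") // set by Prow for all job types
	if dir == "" {
		dir = "."
	}
	data, err := json.MarshalIndent(r, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "results.json"), data, 0o644)
}

func main() {
	r := Results{Job: "sig-performance", Metrics: map[string]float64{
		"vmi_creation_to_running_p95_seconds": 12.3, // illustrative
	}}
	if err := exportArtifact(r); err != nil {
		log.Fatal(err)
	}
}
```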
A: And this post-submit job, so I didn't fully catch everything. The change is that we want to add a post-submit job instead of the pre-submit job, is that it?
A: You're talking about the... oh yeah, okay, sorry. The data you're scraping is from the periodic job. Okay, so maybe what we're getting into is that we eventually do want to scrape the pre-submits. Is that where you're going with this?
B: The job will become our historical benchmark, so it will become the baseline, and then if any developer wants to compare their particular PR result, they can look at the past two weeks of post-submit jobs and compare them with their pre-submit results.
A: So I guess one of the important differences is that there's going to be one of these run, whereas there could be n number of pre-submits run. One of the big differences is we're going to scrape this once and post it to our historical data, instead of scraping however many times to get the pre-submit results and reading them. Okay, okay.
A: One set of data that represents performance after the merge.
A: I wonder... I don't know if Lubo eventually joined, but I was kind of wondering: because of the way the CI runs, what we basically want to get is the last job before it merges. I think maybe they're equivalent, and I don't know if there's a way to do that, but that's where I want to ask Lubo, if he knows, or Daniel.
B: So here's the thing, right: we have pre-submit jobs that will give out results anyway; we're not going to change that. All I'm saying is, let's not use those as the data points that influence our metrics over time.
B: Let's run one job after things merge; that will give us a very deterministic line of what has changed over time. And if you're a developer who wants to measure the impact of your changes, you have the historical data to compare your changes against. That solves two problems: one, it gives a deterministic result for comparison; second, we continue to have the pre-submits to help.
A: All right, that sounds good. So I think we've got a good picture, and Lubo could probably help us with this. I think I'm going to work something up, or you said there already was one, but I think that should be pretty easy, because it's basically a copy-paste, right? Instead of a pre-submit, we just make it a post-submit.
A: And then this one, yeah, okay: expanding the regex. It pretty much sounds like we've got to do number one, because it seems like it's going to give us the historical data. But I wonder if we can make the... yeah, you're right. So about the data: I thought we used the audit tool as part of the scraping. I know the results are different because of the way it processes the data, or goes through the different VMIs and the creation/delete process.
B: So I think the problem is in the logic I use for regex matching. What I do is regex match the VMI start marker, and then all the data that follows that regex match is the audit tool output.
B: So I know that, okay, the VMI results are here; I process the entire output and finish processing VMI. Then I look for the VM results starting point, match that, and follow the audit tool output until the end regex. That is not the case with periodic jobs: the periodic job only outputs a bunch of audit tool things and doesn't differentiate between VM and VMI. So there are subtle differences there that I need to handle now to make this work across all our jobs. Okay.
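Roughly, the marker-based scraping described here looks like the following sketch; the marker patterns are placeholders, not the tool's real regexes:

```go
package main

import (
	"fmt"
	"regexp"
)

// Placeholder markers; the real start/end patterns live in the scraper.
var (
	vmiStart = regexp.MustCompile(`(?m)^VMI results:$`)
	vmStart  = regexp.MustCompile(`(?m)^VM results:$`)
)

// section returns the audit-tool output between a start marker and the
// next marker (or the end of the log). Periodic jobs print the audit
// output without VMI/VM markers, which is why this logic breaks there.
func section(log string, start, next *regexp.Regexp) (string, bool) {
	s := start.FindStringIndex(log)
	if s == nil {
		return "", false
	}
	rest := log[s[1]:]
	if next != nil {
		if e := next.FindStringIndex(rest); e != nil {
			return rest[:e[0]], true
		}
	}
	return rest, true
}

func main() {
	buildLog := "VMI results:\n{\"p95\": 12.3}\nVM results:\n{\"p95\": 14.1}\n"
	if vmi, ok := section(buildLog, vmiStart, vmStart); ok {
		fmt.Print("VMI section: ", vmi)
	}
	if vm, ok := section(buildLog, vmStart, nil); ok {
		fmt.Print("VM section: ", vm)
	}
}
```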
A: So here are sort of two things I could say about this. One is that it would be cool if we could normalize all of our results, and I don't think there's a problem doing that; if we have it consistent between jobs, it's probably going to make our life easier, and then maybe your regex won't require rework. And then there's a second part of this: I'm thinking about the way you're tracking data over time. We're using a regex right now, and I'm wondering if there's something more powerful here, more effective, that's not as likely to be affected by changes.
A: Maybe this is where you're going with the artifacts: we're not as affected by the regex, or by having to do pattern matching and then having certain expectations after the pattern match.
B: Correct, yeah. The exported artifact, that is the second option, will simply read a file on disk and JSON-unmarshal it into the audit tool API. So you'll get everything out of that without any regex matching, I think. But another difference to call out is that the SIG performance pre-submit or periodic job, not the density cluster one, runs the audit tool and gives out results for VMI and VM. I think the density one only runs for VMI, right? It doesn't run VM.
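A minimal sketch of the consuming side under that option: read the exported file and JSON-unmarshal it, with no regex matching. The `Result` type and file path here are hypothetical stand-ins for the audit tool's API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Result is a hypothetical stand-in for the audit tool's exported type;
// the real scraper would unmarshal into the audit tool API directly.
type Result struct {
	Kind    string             `json:"kind"` // "VMI" or "VM"
	Metrics map[string]float64 `json:"metrics"`
}

func main() {
	// Read the exported artifact straight from disk: no regex matching,
	// and it works the same for pre-submit, post-submit and periodic jobs.
	data, err := os.ReadFile("artifacts/results.json") // placeholder path
	if err != nil {
		panic(err)
	}
	var results []Result
	if err := json.Unmarshal(data, &results); err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Println(r.Kind, r.Metrics)
	}
}
```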
A: Yeah, that's true. Okay, I guess I'm maybe less interested in the regex; as long as we don't lose any functionality and it's not too much of a pain, the artifact sounds interesting. I mean, not only for this.
B: Yeah, so maybe that is the third route. What we could do is start exporting that artifact directory today, or as soon as possible, and then hopefully we will have enough time before the release to get enough signal on the exported directory, and we might get away from expanding that regex and just use what we have right now.
A: So what we could do going forward is have the artifact exported, and then we have the ability to scrape from both, and then we basically just slowly phase out the regex, because we don't want to lose that historical data. I think that's for good reasons.
A: We've seen where there have been different changes in the past, and that's valuable. But even for the 1.0 release: we've got, let's say, two months left before we do that release, so we can get two months of artifacts where we can point a few things out, and then we can combine that with the historical data we've got from the regex. I think that gives us the texture we want, and eventually we just move to the artifacts once we get it updated.
B: Either Lubo or Daniel mentioned how to do that in one of our KubeVirt threads, yeah. So I can start that conversation again.
A: Yeah, start another thread, and I will assign you on it. Yeah, let's get this started.
A: Okay, all right, we've got a plan; we'll just leave it about there. You start the thread, let's get this going, and I think we can get this done pretty quickly. Cool, okay, let's go to the results.
B: In the original SIG scale document you had a list of metrics that we would like to collect, right, based on our previous conversations in this meeting. So I took the entire set and scraped results for those.
B: Those changes are not up yet; I just did it for today's call, and I'll have to put the changes up, so that's what you are seeing right now. But my thought process is that, two or three calls from today, if this process works for us, then this is the output we can kind of standardize on. And you can imagine this output being automatically generated each week, and we just go through these results over time.
B: So as you go through these charts, if there is any feedback that will make a reviewer's life easier, we can document and incorporate it in the tool. So that's one thing.
B: One thing I noticed is that the P95 from creation to running for VMI and VM is different. Can you search for P95?
B: Okay, oh, I see what happened. So I think we are missing that metric; these are all just create.
B: Yeah, so I need to add that. But what I was noticing is that the P95 for a VM creating a VMI is a little bit less than for a user creating a VMI. I'm not sure if... oh, here's something interesting.
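For reference, a nearest-rank P95 over per-object creation-to-running durations could be computed along these lines (a sketch, not the tool's actual implementation):

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of durations
// using the nearest-rank method; one plausible way the missing
// creation-to-running P95 could be derived from per-object timings.
func percentile(durations []time.Duration, p float64) time.Duration {
	if len(durations) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), durations...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	rank := int(math.Ceil(p / 100 * float64(len(sorted))))
	if rank < 1 {
		rank = 1
	}
	return sorted[rank-1]
}

func main() {
	d := []time.Duration{3 * time.Second, 5 * time.Second, 4 * time.Second}
	fmt.Println(percentile(d, 95)) // 5s
}
```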
A: Looks like one more patch call. Yeah, you know, the other thing we could do... I was doing the math from that here: we know the create count is a hundred, so you could divide by 100 here, and this would be like per VM. You could go either way with this. Yeah, let's leave it, let's leave it, I think that's fine.
A: Hold on, I'm looking for the query for this.
A: Is it a patch on the VMI? Is that what it is, a patch on the instances count for the VM? Yeah, it is. So it looks like it's this one.
B: You know what would be good to see: we know that before the start of the year this patch call was just one, and it has gone to four. I wonder if we can take all those changes and combine them back into a single call.
A: Okay, we'll track that. It would be cool to see. This is one of those things where... when did we release, actually? I don't know, let's see: 0.59 would end up being February. So between the 0.59 release and the v1 release there is, what's this, a 25% increase.
B: Yeah, some of the data you see, the list call count, it's spread across... the second one, yeah, it's spread across all, like one, two, three, four. I don't know why. Is this expected?
B: What was the last one you saw, the last list? This one?
B: Yeah, so that's another question I have: why does the VM controller have to call this resource, with a list call on it? Yeah.
B: Yeah, it has to expose a bunch of KubeVirt labels, I think so.
A: Yeah, I don't know. Okay, well, these are things I can keep looking at. It would be nice to get rid of these patterns, but we just want to make sure they don't increase; that's exactly the kind of thing we definitely want to make sure doesn't change. Okay, this looks good, cool. And so how is this going to look for you overall?
A: This is how you publish, and when we go into it... I just want to picture the way we'd look at it, to see how healthy things are. So when we go into, let's say...
A: No, but I like the way... to me this is a good attempt, so I'm all for you playing around with this. If you just want to throw it up on the GitHub page and say it looks good or looks bad, you can try something else; that's fine with me. Even if we put these into folders and generate them, I don't know, I mean...
B: If we can agree on one metric that we want to show in the README, let's say creation to running for VMI, P99 or the max, then what we could have is a separate directory just for that image, and that image will pop up in the README.
B: So then the README will have that image, and our weekly SIG scale talks can look at the in-depth performance metrics. A viewer coming in who wants an overview of what is happening can look at these graphs here, and then, if they want to dig into details, go to the index HTML page.
A: Yeah, okay, so I'll say okay, we can just go with this. Where we go with this is maybe, like you just said, you publish a few things on the README here, then we mention this page, and that's the detailed view.
B: Okay, and does eight weeks sound good to you? I mean, our release will be three months, so...
A: Yeah, we want to... I'm a little more inclined to do more than one release. I don't know if we could do that, but what's interesting is, if we were to... well, for this time around...
A: Maybe we just do 0.59, but let's say for the next release after 1.0: there's 0.59, 1.0, and whatever that next one is, 1.1 or whatever we call it. For that release we graph all three of them, and we have a comparison across the three, the latest and the previous two, because that's kind of like our window, right, like our support window. So I was thinking we just do those three.
B: Yeah, I think that will take us to a year, so there will be 52 weeks, yeah.
B: Right, I think we need to start preparing for that. So in the notes, in the further steps, I think we will have to add that we'll have to start exporting the output directory. So even if that bucket is garbage collected, we have the historical data for the current runs captured.
B: Yeah, doing that will keep the door open: let's say a year from now we want to process this data, we can plot it in a graph.
A: I think that's where we can go, yeah; I think that makes sense as a direction to start in. Okay. So we need to get this issue sorted, and then I think we can start doing the publishing: we get the automation hooked up, you can have your GitHub page somewhere, and then maybe we check it into the README. So we start doing reviews, and then eventually we can look at some of that stuff. Cool.
A: Okay, all right, thanks, Alay, cool. All right, anything else, then, beyond the topics we've got for today? I don't know if you're still here, Ellie; have you got anything you wanted to bring up?
D: Also, in that storage bucket you can actually set a retention duration, so it deletes data that's more than two months old or something like that, and you don't need to maintain all the cleanup and all this stuff. It can be an option; think about it.
D: But then you maybe need a job that updates it automatically, by a GitHub Action or something like that, right?
D: ...the bucket, so you can fetch it fast later.
D: Actually, do you organize it by date and time?
D: I mean, do you have a folder for each one with the date, something like that?
B: Yeah, so we need to clean up the data that's older than 52 weeks. Yeah, that's a good point.
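A sketch of what that cleanup could look like, assuming the weekly folders are named by their Monday date as described just below:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"time"
)

// pruneOldWeeks deletes weekly result folders (named by their Monday
// date, e.g. "2023-04-10") that fall outside the retention window.
func pruneOldWeeks(root string, keep time.Duration) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	cutoff := time.Now().Add(-keep)
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		week, err := time.Parse("2006-01-02", e.Name())
		if err != nil {
			continue // skip folders that aren't week dates
		}
		if week.Before(cutoff) {
			if err := os.RemoveAll(filepath.Join(root, e.Name())); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Keep 52 weeks of weekly folders under a placeholder root directory.
	if err := pruneOldWeeks("results", 52*7*24*time.Hour); err != nil {
		log.Fatal(err)
	}
}
```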
B: Yeah, that's a good idea. So I have two phases in the current tool. First phase: it collects data and does the regex match I was talking about earlier, and outputs the files into performance/<job-name>/<job-id>/results.json. So that's step one; it has the date and time of when the job was run.
B: Correct. Then the next step is to take all of those jobs and organize them into weekly folders. So for, let's say, this week, the folder date will be the starting Monday.
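A sketch of the folder-naming step, mapping a job's run time to the Monday starting its week:

```go
package main

import (
	"fmt"
	"time"
)

// weekFolder maps a job's run time to its weekly folder name: the date
// of the Monday that starts that week, e.g. "2023-04-10".
func weekFolder(t time.Time) string {
	// Go numbers Sunday as weekday 0; shift so Monday is the week start.
	offset := (int(t.Weekday()) + 6) % 7
	return t.AddDate(0, 0, -offset).Format("2006-01-02")
}

func main() {
	run := time.Date(2023, 4, 13, 10, 0, 0, 0, time.UTC) // a Thursday
	fmt.Println(weekFolder(run))                         // 2023-04-10
}
```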
D: So we have a generic count of how much was read and how much was written, so we can see the distribution across time. I don't know if that's enough to find a performance issue, because in your way you actually drill down inside, so you can really put a microscope on it and see what actually caused the issue.
B: It is a known problem that if you come up with a shrewd enough set of list calls, it can take down the API server; the API server can actually get OOM-killed on, let's say, 100 or 1000 list calls where the list call is large enough, where the data returned from the list is large enough. So even if you have a read and a write internally, whether that read consists of n list calls versus one get call matters, in comparison to 10 get calls and one list call.
B: I think we have it separated out into each API call, which will help us keep track of at least the list calls, because we know those are expensive, and we know patch could be expensive. So I hope that answers it, yes.
D: Clear. What I mean is: for each API call we have the exact problem, and we could see it on, let's say, one screen without needing to scroll up and down, which is very difficult. If there were a way to do a summary HTML, you could see everything, and after that, if you want, drill down into it. I think that would make it very simple later on to monitor, and only if you want to investigate something do you drill down into each call separately.
B: Yeah, no, I think that makes sense to me. We were doing this over text, right; this is the first iteration of doing it over graphs. What is helpful in the graph is that we only drill down if we see a spike up or a spike down, so I think that is something Ryan and I are getting used to.
D: A separate graph for each is very difficult to follow up with if you want to come in every day, or once a week, to see if there is any drop or any improvement, and to understand it without scrolling down and up through every graph.
B: The graphs you see have a lot of different meanings. I don't have ideas on how to effectively summarize them into one, I mean...
D: Is that the average for each one? Can we find the average for each run, or is it just a bunch, each one separately? If you can, for example, aggregate for each API call, or for each metric, and calculate the average, you can at least show the distribution in one plot.
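A sketch of that aggregation: collapse the raw samples into one average per API call so everything fits on a single plot (the call names are illustrative):

```go
package main

import "fmt"

// averageByCall collapses raw samples into one mean per API call, so a
// single plot can show all calls side by side.
func averageByCall(samples map[string][]float64) map[string]float64 {
	avg := make(map[string]float64, len(samples))
	for call, vals := range samples {
		if len(vals) == 0 {
			continue
		}
		sum := 0.0
		for _, v := range vals {
			sum += v
		}
		avg[call] = sum / float64(len(vals))
	}
	return avg
}

func main() {
	samples := map[string][]float64{ // illustrative per-run call counts
		"PATCH virtualmachineinstances": {1, 2, 4, 4},
		"LIST virtualmachines":          {1, 1, 2, 1},
	}
	fmt.Println(averageByCall(samples))
}
```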
D: And after that, if there are any changes, you can see them very simply. We are using Grafana, but I cannot compare it to your way, so it's difficult. In Grafana it's simpler, but it's difficult to do that in an open-source setup that you can follow weekly. I don't know, but I think for now you can go and see things.
B: Yeah, that's a good point. Yeah, that makes sense, and that's where, Ryan, we have a Grafana dashboard as...
A: That we're aware of? No, no, this is what I was talking about at the beginning. Right now the job is failing; this is what I'm saying we need to fix. You can see here it's been failing for a little while, so that's why there's no data in here; it's just not working. I don't even know how long this has been going. What's the date of this? 3/24. So it's almost two or three weeks, and there's nothing in here.
A: There is... I think this is published somewhere, and...
A: No, they're two different things. What Alay is doing is getting the periodic jobs here and scraping this stuff, and the reason this is different is that this creates and destroys a cluster three times a day. So there's no Grafana for this; there's no way to get this into a dashboard.
B: You know, Ellie, you are talking about the organization of data, right? I think there is a pre-step to it, where this instance does not have the data from the scraping that we are doing. So what I was suggesting is that step one is to push the data into this Prometheus instance, for which we will need access, and then step two is to create a panel that says SIG performance periodic review, and that becomes your entry point to review this dashboard.
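Prometheus itself is pull-based, so pushing one-shot job results usually goes through a Pushgateway (assuming one fronts this instance) or remote write. A sketch using the client_golang push package, with placeholder URL, job and metric names:

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// Hypothetical metric for one periodic run's creation-to-running P95.
	p95 := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "kubevirt_perf_vmi_creation_to_running_p95_seconds",
		Help: "P95 VMI creation-to-running latency from the periodic job.",
	})
	p95.Set(12.3) // value taken from the job's exported results.json

	// Push to a Pushgateway assumed to sit in front of the Prometheus
	// instance; the URL and job name are placeholders.
	if err := push.New("http://pushgateway.example:9091", "sig_performance_periodic").
		Collector(p95).
		Push(); err != nil {
		log.Fatal(err)
	}
}
```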
B: No, no, this Prometheus instance does not have the data that we are displaying in the HTML. That's what Ryan was saying.
D: So let me explain what we did. In order to save history data we use Elasticsearch: we upload the data to Elasticsearch and connect it to Grafana, and inside Grafana we can show this data, the history data that you talk about, or whatever job you wish, from Elasticsearch. I know the Grafana here is connected to Prometheus and it's real-time data, but...
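A sketch of that flow: index one JSON document per run via Elasticsearch's REST API (POST /&lt;index&gt;/_doc), which Grafana can then query through its Elasticsearch data source; the host, index and field names are placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// One result document per run; field names are illustrative.
	doc := map[string]any{
		"job":       "periodic-sig-performance", // placeholder job name
		"timestamp": "2023-04-13T00:00:00Z",
		"vmi_creation_to_running_p95_seconds": 12.3,
	}
	body, err := json.Marshal(doc)
	if err != nil {
		panic(err)
	}

	// POST /<index>/_doc indexes a document; Grafana can then chart the
	// index over time as historical data.
	resp, err := http.Post(
		"http://elasticsearch.example:9200/sig-performance/_doc",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("indexed:", resp.Status)
}
```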
A: Yeah, Ellie, I think we're kind of circling around and saying the same thing. Basically, what I think you're saying is: you want to see the data that Alay has here, you want to see it here. And Alay is saying that what we need to do is push it into the Prometheus instance that's backing this. I don't know how we do that, but let's just say there is a way to do it.
A: If there were, that would be neat, because then we could have all our historical artifacts tracked in Grafana, and that would make a lot of things easier. But I don't know how realistic that is. Okay, we can have that conversation if you think it's possible, but it might not be that easy; maybe we need Elasticsearch, maybe we need some other repository to store the stuff.
A: It might be a huge effort, whereas this is really good and already handy, so I don't know. We'd need to do a lot of scoping to figure out whether this is a possibility. There are also other consequences: this cluster is hosted by IBM, and I don't know what the timeline for this stuff is; maybe it gets repurposed at some point, and then we've run into a problem.
A: I kind of like this idea, and I understand there are some challenges, but maybe we deal with it by staying with what we have now, and if, like you're saying, this becomes a problem, or we start to expand the metrics, maybe that's when we look at expanding it to Grafana or something, because of the maintenance problem we'd have.
A: Okay, yeah, so I don't want to close the door on it, but we can consider it. I just think it will take some work to fully understand what's involved to do it. Okay, yeah, it's a good point, though. We're a few minutes over, so guys, I think we're going to wrap this up. Thanks for the discussion; we've got a few action items for next time, next week.