From YouTube: CNCF SIG Observability 2020-06-23
A
We have only six people, so let's do it like this: we give people a few more minutes to join, but when we start, we tell them that next time we will start on time — and then we actually start. Honestly, I'm somewhat tired of always waiting; we only have 50 minutes anyway, and if we always wait five minutes, we're basically down to 45 minutes, and all of this is way too interesting for that. So — your ears get so cold because of all the wind, which is all of a sudden not catching in your hair; it's absolutely fascinating, and your head gets hot in random bits and pieces and spots, because your head is trying to heat the brain so that you don't, what have you, die of cold. Which is all irrelevant, but I'm trying to fill those two minutes until five after, and then we start. And going forward I'll also send an email — actually, this may warrant an actual action item.
A
The thing which I heard — so, okay, it's five after, so let's start. First things first: next time we will start precisely on time, and I have sent this out for future meetings too. So, an update on the TOC: there was this TOC SIG call last week, and we have a sponsor for both the Thanos and Cortex incubation proposals.
A
She is called Katie Gamanji — I hope I pronounced it correctly — and she took both of those due-diligence items for the TOC. I poked her today and didn't get any update yet on what she thinks about the due-diligence documents, but I also don't have any kind of ETA or anything. So for now we just wait and see what happens. But we have a TOC person who is looking at both Cortex and Thanos, which is already a lot more than we had a week ago. So that's nice.
E
Really, I can essentially give some quick — can you hear me? Yes. I can give a quick update on what was done since the last meeting, because at the very end of the last meeting, two weeks ago, we started this topic, and I kind of announced that we are definitely looking at this as the Prometheus community, and announced what the communication channels are and what we want to achieve.
E
Furthermore, we are touching observability data which, these days, we are, you know, retrieving and gathering for long retention times — like, you know, years — and in large amounts, so we want to make use of it. So I guess we'll be having some conversation in those meetings and on our communication channels, and I would love you to speak up even now if you have any feedback — there will be time for that. I would also very quickly give some status on what happened outside of our meetings.
E
First of all, we had a Prometheus community meeting where we touched on this very topic. I will copy in the link where you can find the video so you can watch it in full, but essentially, as part of this meeting, we talked about this very topic — it's kind of in a tweet, but it's available — and we spent some time on it. In this meeting we had Rob from M3DB, a kind of long-term metrics storage system.
E
They shared what they are looking for in terms of spiking this topic — how to integrate metrics data with analytics systems — and he mentioned that they are planning to address some kind of Spark integration, and Presto. So these are very specific things that we can already try to integrate with; you're welcome to check the video for the details. And we also have lots of feedback on the GitHub issue — I tried to summarize this topic there briefly, so you can check it out.
E
It's like a summary of what we know so far: what projects were mentioned and how we can solve integration from metrics into those projects. Apart from Spark and Presto, ClickHouse was mentioned, Druid, TimescaleDB too — all sorts of systems — and we were looking for the direction that would be, you know, the most useful for the community.
E
On top of that, one more thing: Osun, who I think is on this call, was super active, and we found, you know, a couple of cool — maybe not very popular — projects that are already integrating the Prometheus storage format and coupling it with, I think, some kind of analytics API that you can connect with Spark. And Osun, do you want to maybe give a quick TL;DR summary of the latest of your findings?
F
If we can reuse that platform, it would be very good. So I focused on the Spark integrations: they use some kind of internal Java, and Scala internals of the library are also still connected, so there would be extra effort to maintain it for each Scala and Spark release. The Elasticsearch, Cassandra, and ClickHouse drivers are common in this sense, I can say. But — a little dream, or a very crazy idea — I see a lot of activity on Manticore, and— [connection drops]
D
So while we wait for him to return, would it be useful for us to sort of brainstorm a list of scenarios that would be enabled by having these systems? I don't want to lead any witnesses, so I'll speak last, or later, but this is an active topic for at least my day job, and we have exactly this squarely in our sights for a variety of scenarios — we just don't have it curated yet. There are no wrong answers; it's just brainstorming, a starting place. That might be worth a couple of minutes. What do you think?
E
For this kind of issue we can gather the use cases, maybe from our point of view — like, from newbies who are maybe not using these amazing analytics systems. So obviously we can be wrong, so it's definitely worth having maybe a short Google Doc that we can collaborate on offline. So maybe let's have that as an action item, right? I will share some common document and focus on requirements, not necessarily the exact solution already, right? — That makes sense to me, I think.
D
Many of the commercial offerings today — like Datadog, for example — do analysis of periodicity for anomaly detection on a one-to-two-week maximum time horizon. We run a lot of machine learning, and we serve a lot of models for real-time auctions and other fairly real-time things, being in ad tech. So we'd like to apply some of that same methodology.
D
...to, you know, the vast firehoses of data we have coming not just from infrastructure and various layers like Kubernetes and the various cloud-hosted services we use, but also leveraging custom metrics that are exposed as Prometheus- or OpenMetrics-formatted data. Today, you know, we're dumping that into Cortex, but we've explored everything from Timescale to InfluxDB — basically anything that can take a remote-write input stream. We haven't solved this challenge yet, but that is one of our question areas, so that we can—
D
You know — like, for example, in the insurance market: health insurance in the U.S. is an open-enrollment thing, right, and it's usually in the fourth quarter of the year, so we will see huge spikes then. But in other domains, like financial services or other places, you have fairly predictable ebbs and flows of traffic that dramatically change, particularly in cloud-native systems. So being able to have alerting that's not threshold-based and is not anomaly—
E
This means that it's not easy; there was no kind of API that allows it — maybe, like, a streamed, slower API that would allow producing those reports, right. So this is the kind of use case that we found problematic and that we want to solve with this analytics effort. The second thing is good discoverability of what dimensions we have available, right?
E
So one thing is that we want to be able to fetch this data for machine learning and for other use cases, but first we need to know what data we have, and in Prometheus that is not that easy, because the best knowledge you have is when you are producing this data — if you know what your application actually exposes, what metrics are exposed, right. If you want to discover those, then you are putting a heavy load on the system, and there's no very good API for that.
E
So, you know, something that this is trying to solve is discoverability of your data, and allowing joining of data from multiple sources. Yes, there is definitely, you know, a use case for having those reports, or anomaly detection, as you said, that will combine not only metrics but also logs — and maybe also, you know, the data that you got from Segment, that you know from the website: what users are actually accessing. So those are further use cases that we could see; but definitely, yeah, let's gather most of them.
A
To make one note, so everyone is explicitly aware of this: we already have different use cases, inasmuch as Matt was talking about shipping data off and doing analysis somewhere else, whereas Bartek is talking about doing it within the observability stack — which is already a huge difference. And with that, I'm going to shut up and require someone who's not Bartek, Matt, or me to pipe up and voice an opinion, please.
H
I mean, on some level, I like to keep things small and composable, so shoving a whole bunch of data-analytics stuff into an observability stack kind of feels like creating a massive monolith instead of a bunch of little tools that compose to do something nice. But that doesn't mean you can't have a marketing umbrella that's got a bunch of smaller projects underneath it that go and fit together nicely.
A
Just as a thought, you can also have something in the middle that basically enables composability. For example — and we are again very deep in Prometheus-land — if Prometheus had a batch API versus an interactive API, where you can submit requests which can return in an hour or in a day; I don't care, I just care about this happening at some point. That's basically what mainframes grew large with: this distinction between interactive database access and batch database access, and this simple split enabled insanely powerful use cases.
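The batch-versus-interactive split described here can be sketched in a few lines. This is a hypothetical toy, not a real Prometheus API: the class name, the job model, and the PromQL string are all invented for illustration — an interactive call would return immediately, while a batch call returns a job ID the caller polls later.

```python
import queue
import threading


class BatchQueryQueue:
    """Toy batch endpoint: accept queries now, evaluate them whenever."""

    def __init__(self):
        self._jobs = {}
        self._q = queue.Queue()
        self._next_id = 0
        # A single background worker stands in for the batch evaluator.
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, promql: str) -> int:
        """Enqueue a query; the caller gets a job id to poll, not a result."""
        self._next_id += 1
        job_id = self._next_id
        self._jobs[job_id] = {"status": "pending", "result": None}
        self._q.put((job_id, promql))
        return job_id

    def status(self, job_id: int) -> dict:
        return self._jobs[job_id]

    def _worker(self):
        while True:
            job_id, promql = self._q.get()
            # Stand-in for an expensive evaluation over years of data.
            self._jobs[job_id] = {"status": "done",
                                  "result": f"evaluated: {promql}"}
            self._q.task_done()
```

The point of the sketch is only the shape of the contract: submission is decoupled from evaluation, which is what lets the backend schedule heavy scans whenever capacity allows.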
H
I'm a hundred percent behind that. I think that the really interesting story is the APIs that enable the external use cases. So instead of going and shoving a, you know, Jupyter notebook into Prometheus — lord help us — building an awesome API that enables those new types of workloads is definitely, a hundred percent, the UNIX philosophy, composability, community play, yeah.
D
That would let us kind of not muddy the observability stack with this analytics stuff, but would still make it very simple to move. And in the same spirit, you know, one of the things that I have the least amount of, like, concrete plans about how we're going to achieve is the same with trace data. In particular, we have a lot of services that we're universally instrumenting with OpenTelemetry; the Linkerd mesh we're using can also export headers as well, and in those spans—
D
Many of our development teams are finding it useful to put in additional metadata — like, you know, logged messages or stack traces, failure things, unique identifiers. So we have this whole bunch of data that's not Prometheus, for which I don't have the same set of APIs today to handle across different trace backends. We're sending everything to Jaeger, right — but how do I get at all of that in a good way, and how do I use other backends or whatever to bring that trace data into scope?
H
For analysis — I really love the mirroring idea, because, going back to the Segment comments from earlier, that's what I use Segment for: literally just mirroring and splitting out all my analytics data into a bunch of different backends, whether it's Google Analytics or BigQuery or whatever, and it's my favorite tool on earth specifically for that reason. There's a whole bunch of different tools that do a bunch of different things; I want a single stream that comes in and then splits out into other backends that I can slice and dice different ways.
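The "single stream in, many backends out" pattern is simple enough to sketch. Everything below is illustrative — the class and the two list-backed "backends" stand in for real sinks like BigQuery or Google Analytics; a real mirror would also handle per-sink failures and backpressure.

```python
from typing import Callable, Dict, List


class Mirror:
    """Fan one event stream out to every registered backend sink."""

    def __init__(self):
        self.sinks: List[Callable[[Dict], None]] = []

    def add_sink(self, sink: Callable[[Dict], None]) -> None:
        self.sinks.append(sink)

    def publish(self, event: Dict) -> None:
        # Each event is delivered to every sink; sinks never see each other.
        for sink in self.sinks:
            sink(event)


# Two stand-in backends, modeled as plain lists.
analytics, bigquery = [], []
m = Mirror()
m.add_sink(analytics.append)
m.add_sink(bigquery.append)
m.publish({"user": "u1", "action": "page_view"})
```

The design choice being illustrated: producers emit once, and the mirror owns the fan-out, so adding or removing a backend never touches the instrumented services.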
D
PHI — and there are so many SaaS tools we could use, but it requires, you know, FedRAMP certifications, just a lot of, you know, talking to compliance, auditing. It's a long — it's a heavy lift operationally to use some of these tools, because the data needs to stay within our own VPCs and networks. So from a requirements perspective, I think, at least for our business, self-hosted software is a pretty important need. And when we get into the volume — you know, we did some tests with, you know, Linkerd, the service mesh—
G
You can actually send it to Elasticsearch, for instance — and Jaeger had its indices there — but it's very — it's not obvious out of the box that you actually need to take care of that in the backend. We had exactly the same question internally: we generated a lot of, eh, traces, a lot of metrics — and how do we do analysis on that? Right now it's a bit — it is not easily discoverable, basically, yeah.
D
...of these cases, and for that we've been using Grafana, you know, to stitch together the logs, traces, and metrics, and we expect that to be a nice fit; but on the analytics side, you know, to dive deeper in, I think there's a lot of opportunity. Does anyone know of projects or work that might help fill this gap, or should we at least take an action to report up to the TOC that this is one domain that is ripe for new projects to join the CNCF?
B
I think, yeah, Uber is probably the one that has done the most on the open-source side; I think the rest is all in the commercial product space, where people are building their own indexes on top. And the OpenTelemetry project also tries to focus really on the data collection, but not the processing and not the analytics part. I think the biggest part of analytics is definitely a commercial or a specialty thing projects have done recently — I mean, those are the other ones I've seen come up there.
B
So I'm not sure how much I'm helping here — I'm following the discussion; I was mostly just listening in for most of the time. But if you want to combine these different data sources, I think that's really what you need to agree on: how they coalesce labels, so that you can correlate across the different data sources. And on the other topic I mentioned before, about processing and input/output: I personally like one source of data — like one stream of data — processing into different tools.
B
This is also a lot of the work that I'm focusing on right now: how do I not have to — or how can we reduce the amount of data as much as possible? Because if you look at it: if in one day you produce 200 — or, let's say, 20 — terabytes of data, and you're streaming it to five tools and storing it in five tools, those 20 terabytes become 100 terabytes; and if we add even more tools, all of which store the data again — and not just the results after processing — it just becomes more and more.
B
So that's why, if you wanted to look into something, I would look more into a stream data processing model. Telemetry data can easily be processed in a stream-data-processing type of approach — probably a lot of people are using Kafka for these kinds of scenarios. But I'm just not sure — I'm off topic right now as to where I'm part of this; I just wanted to add it.
D
How can we be considering — rather, how can we, you know, do as much as possible where we're collecting all traces but, at the point where we're collecting, cull the things we don't care about? So programming models to do stream processing at the edge locations where the services are — you know, in cluster or on a virtual machine, whatever — before we store all of it, is, we think, maybe the best approach.
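Culling at the edge before storage can be sketched as a simple streaming filter over spans. The keep-rules here (errored or slow) and the span fields are invented for illustration; real tail-sampling policies are richer, but the shape is the same: decide per item as it flows past, never buffer the whole stream.

```python
def cull(spans, min_duration_ms=100):
    """Keep only spans that are errored or slow; drop the rest at the edge."""
    for span in spans:
        if span.get("error") or span["duration_ms"] >= min_duration_ms:
            yield span


spans = [
    {"name": "ok-fast", "duration_ms": 5, "error": False},
    {"name": "slow", "duration_ms": 250, "error": False},
    {"name": "failed", "duration_ms": 10, "error": True},
]
kept = [s["name"] for s in cull(spans)]
```

Because `cull` is a generator, it processes one span at a time — the property that makes this runnable next to the service, in cluster or on a VM, before anything reaches durable storage.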
E
Yeah, I think we are touching exactly this topic. There are two ways: either you move everything and convert everything to the target tool's format, or we have some way of aggregating and streaming those data and just fetching what you need from the various data sources — and this is, yeah, just nicer, because you don't duplicate data. But yeah, I think—
D
One actionable thing that anyone on this call could start working on — maybe we just make a ticket for it and see who's interested — but, you know, on this very topic: I have active questions now that I don't see easy answers to, without just trying it, around Jaeger's scalability. Like, what happens if I take a blank amount of data and throw it into a Jaeger backend and I just keep putting it in? At what point—
D
—does it fall over and not remain usable, or should I run lots of little ones? You know — much like we're psyched about Cortex, because it provides this centralized, fairly cheap way to store a lot of time-series data, make it queryable, and all the things that Cortex does — we kind of want something similar for Jaeger, or rather for trace data, and I don't— I—
E
There are definitely good discussions in the Jaeger team about how to scale it more, and there was a recent spike on Badger, the embedded database, and there is much, much more going on. But I think this isn't — so, like, analytics: the kind of problem right now is just another — maybe more high-cardinality — kind of database, and the question is whether we can scale it; but still, it's just another datastore, right? And I think we should have this topic — how to scale Jaeger — and I think that might be something we can pull in.
D
I don't even know how to make Jaeger scale — I mean, what is a suitable, durable store for trace data that could be used for analytics purposes, yeah. So it's less a Jaeger thing and more of a, like — we have remote read and remote write, right, and I can have different backends — but that, to me, seems like that.
D
On the previous topic — so, here, on the stream processing: are you saying this is something that you're considering — my audio got garbled when you were talking — or are there projects, or are there APIs already, that could be suitable here, or could be applied to doing stream processing on this?
B
Okay — technically, everything you do with tracing will be stream processing, because it gets in as a stream; you just have to merge a lot of data together. That's how we do it at scale, and also — I mean, we did it before, when we exported data; it was also stream-based, because you need high performance, otherwise the amount of data gets crazy. But the processing being stateful is a mixture, and maybe that's what makes it so hard.
B
What makes it hard is the mixture of stream data processing: if you, for example, need to calculate metrics from that stream of traces that you get in there — like service response time — if you want to collect them on a per-trace level, which you already don't have to do if you have metrics for response times; but if you want to slice and dice them, then it's really this high-cardinality-database type of query that you run against these traces.
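Calculating service response-time metrics from a stream of traces, as described here, reduces to a streaming fold. The span shape and the mean-latency summary are illustrative assumptions — a real pipeline would emit histograms and keep label sets bounded — but the core move is the same: aggregate as spans flow past instead of querying raw traces later.

```python
from collections import defaultdict


def service_latency_averages(span_stream):
    """Fold a stream of spans into per-service mean response times (ms)."""
    stats = defaultdict(lambda: [0, 0.0])  # service -> [count, total_ms]
    for span in span_stream:
        s = stats[span["service"]]
        s[0] += 1
        s[1] += span["duration_ms"]
    return {svc: total / count for svc, (count, total) in stats.items()}


spans = [
    {"service": "checkout", "duration_ms": 120.0},
    {"service": "checkout", "duration_ms": 80.0},
    {"service": "auth", "duration_ms": 30.0},
]
averages = service_latency_averages(spans)
```

Note the trade-off the speakers are circling: once aggregated like this, you can no longer slice by arbitrary per-trace dimensions — that slicing is exactly the high-cardinality query the aggregation avoids.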
B
But even there, we found that stream data processing solves a lot of the issues we analyzed — for example, for problem patterns — as we are collecting the data, rather than later on. I mean, my personal opinion is: if you do good data analysis on trace data, you will reduce the number of use cases where you have to do a lot of real-time queries on that data, because the bigger world right now goes into aggregation of traces. It's kind of interesting to watch — as Jonah here, who's working in that space, knows — yeah.
B
We started out always sending traces, up to the point where we realized that looking at single traces is nice for a developer to understand what the calls are, or for debugging intermittent errors; but, reasonably, large systems just have too many traces, and similarity and outlier analysis of traces — which are eventually graph-based data structures — is usually what you're looking for, rather than the individual trace.
C
Trending and other types of analysis of the trace data that's being collected — and ultimately you could actually detect a lot of problems just based on metrics, versus having to actually analyze all the traces, because with Jaeger it just hammers the backend, and it becomes really difficult with Elasticsearch to do these types of things at scale. This is exactly what we're dealing with in our system as our customers continue to send more trace data, you know, to our backend, and it's definitely kind of a disconnect between traces and metrics that I'm seeing in the community.
D
Same approaches — but one other point is why we've chosen Loki for some of our log-aggregation scenarios: in particular, Promtail, which is a DaemonSet that runs in a Kubernetes context anyway, tails all the logs; it can look for things and produce log-derived metrics that are exposed on a Prometheus endpoint, which can then be scraped by the same in-cluster Prometheus — to get exactly the same thing, to get, like, frequencies and various custom processing.
D
...on targeted workloads as well. But my point was the notion of deriving time-series metrics at the point of collection, because it's simpler — sometimes more expedient — to reason on the time series than it is to plow through the full trace data or the full log data. It's a common approach.
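The Promtail-style idea of deriving metrics at the point of collection can be sketched as counting matching log lines into labeled counters instead of shipping every line. The regex, metric name, and logfmt-ish lines below are invented for illustration; Promtail does this via configured pipeline stages, not Python.

```python
import re
from collections import Counter

# Hypothetical rule: bump a counter per log level for error/warn lines.
LEVEL_RE = re.compile(r"level=(error|warn)\b")


def derive_log_metrics(lines):
    """Turn a stream of log lines into (metric, label) counts at the edge."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[("log_lines_total", m.group(1))] += 1
    return counts


lines = [
    'ts=1 level=info msg="ok"',
    'ts=2 level=error msg="boom"',
    'ts=3 level=error msg="boom again"',
    'ts=4 level=warn msg="careful"',
]
metrics = derive_log_metrics(lines)
```

The counters are cheap to scrape and to alert on, which is exactly the "reason on the time series instead of plowing through the full log data" point above.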
B
You have to aggregate it somewhere again, though, because — are you also doing it based on that fixed data? I think this discussion we're having right now, per se — what to use for what — is one that has been going on for over ten years in the monitoring community. Yep — just telling people what to do: is it good to be like, what to use metrics for, what to do with traces, what not to do with traces?
D
I guess that's why I wanted to focus on scenarios. I've been holding off, but I could just really quickly bounce through two additional scenarios that we might want to add to the list — I'm curious if other people have these same ones. One is correlation of infrastructure metrics with cost: you know, most cloud providers that we use don't give you costing information, and we'd like to be able to correlate those two things. There are two others that we had when looking at horizontal versus vertical pod autoscaling.
D
You know, that's a place where we would like to run experiments and capture the impact — particularly for legacy workloads that were inherited, shall we say, that aren't completely understood but have, you know, good test collateral, where we can filter some traffic in to run experiments. You know: when does it make more sense to scale vertically versus horizontally? Just capturing the metrics for all of that, and then correlating it with what the autoscaler did, can provide—
D
—you know, the ability to create models to figure out how we should better utilize hardware, and then correlating all that back to the cost. Those are two scenarios, at least, that are ways we internally might want to use analytics. And then the third I'll mention — that we've launched internally, to brag a little — is actually quite simple: just a deploy-tracker service. So some of our CI/CD, when it makes deploys — or when a lot of the hand deploys that aren't automated yet are made — we've got that going to a backend service, which is then making, in our case, Grafana annotations, and also surfacing that via a Prometheus aggregation push gateway, so we get time-series metrics out of our deploys. So we can correlate, like, deploys to anomalies or errors or whatever is happening, good or bad, in our infrastructure. And that's another thing where there's so much data, and so many different points of measurement across different services, that applying a more systemic, analytical approach there can make it less, you know, needle-in-a-haystack, traditional debugging.
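The deploy-tracker idea reduces to two operations: record each deploy as a timestamped event, then, given an anomaly, ask which deploys landed shortly before it. Everything below is a hypothetical sketch — the function names, the flat-integer timestamps, and the five-minute window stand in for the Grafana annotations and push-gateway metrics described above.

```python
deploys = []


def record_deploy(service, version, ts):
    """Record one deploy event (in practice: an annotation plus a metric)."""
    deploys.append({"service": service, "version": version, "ts": ts})


def deploys_before(anomaly_ts, window_s=300):
    """Deploys that landed within `window_s` seconds before an anomaly."""
    return [d for d in deploys if 0 <= anomaly_ts - d["ts"] <= window_s]


record_deploy("checkout", "v42", ts=1000)
record_deploy("auth", "v7", ts=2000)
suspects = deploys_before(anomaly_ts=1100)
```

Even this trivial join is the "less needle-in-a-haystack" payoff: instead of eyeballing dashboards, the anomaly time directly yields a short suspect list of deploys.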
G
It's exactly something that's being considered: basically, when we're doing a deploy, you want to look at the trace data to see exactly what changed. If we can generate metrics out of there, then we immediately have insights into, you know — maybe this span suddenly took 10 percent longer because we did this deployment. I mean, correlate that also to response time as a metric — and getting all of that working at the same time, creating all that, is kind of really hard right—
G
—now, because you have to integrate with lots of different systems, and, yeah, with traces you don't really have good tools right now to do that which can handle the load and scale well. But yeah — one of the things you said was you wondered if others were kind of heading in the same direction: definitely, I mean, with exactly this use case right now.
E
Yeah, I think I want to kind of bump this thread, because we are actually thinking about that in the Prometheus community: with Loki and Jaeger we are kind of collaborating together, and we already have some POCs of solutions that, for example, have strong links between this data. So you are still gathering metrics, you are still pushing logs like you used to, you are still pushing traces.
E
However, the idea is that you kind of only index once, because you actually have the same resources in both of those signals, right? And so you can totally have the same index that will give you, you know, that certain application, for example. So that's already reducing a lot of the space and complexity and resources that you have to use for those signals, right.
E
So, for example, you have Prometheus giving exemplars that can hold trace IDs — for, for example, a slower or some otherwise interesting request that happened during observation of the latency of this request. So then you can navigate to the trace; and in the same way you can share the trace ID and the request ID with the logging side, so you can jump between logging and tracing. And if those databases kind of work together in a way that they share some kind of information — for example indexes, which are very heavy—
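The exemplar linkage can be modeled in a few lines. This is only a toy data model — real Prometheus exemplars travel in the OpenMetrics exposition format alongside histogram buckets — but it shows the payoff: a latency observation carries a trace ID, so the worst observation points straight at the trace to open.

```python
observations = []


def observe_latency(seconds, trace_id=None):
    """Record one latency observation, optionally tagged with a trace ID."""
    observations.append({"value": seconds, "trace_id": trace_id})


def slowest_trace():
    """Return the trace ID attached to the slowest observation, if any."""
    worst = max(observations, key=lambda o: o["value"], default=None)
    return worst and worst["trace_id"]


observe_latency(0.05, trace_id="trace-aaa")
observe_latency(2.30, trace_id="trace-bbb")
```

This is the metrics-to-traces jump described above; sharing the same trace/request ID with the log pipeline gives the traces-to-logs jump for free.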
E
There is a huge win here, so we're trying to go for this — because you really need to gather metrics and push metrics, right, as we agreed, for the different purposes: for alerting you have to have reliable, real-time metrics; for traces you are fine with a slower latency for, kind of, trace availability. So there — yeah, it's hard to have just one solution and just use traces and forget about everything else.
G
It's also interesting — you mentioned correlating everything: if you use Elasticsearch completely — and I'm in no way, like, trying to sell it or anything — you can actually already do that, but you have to be completely locked in to that system. It would be nice to be able to use the CNCF projects in the same way, so that you can correlate all those things together and still keep to the cloud-native kind of way of working. Exactly.
A
I think one thing which we should be doing is start collating use cases — start collating use cases in a shared document; if you want, I can send one around without any restriction, so everyone can just toss stuff in. Maybe let this sit for a week or so, then have a brainstorming session, and from there try and distill more generalized use cases out of all of this. The other thing—
D
It seems like a, you know, figure-it-out-as-you-go thing, and I'd like to just create and provision what's needed — so there might be some low-hanging fruit there for new contributors or new members of the SIG. I would encourage us all to make suggestions, like in Slack or in GitHub, but I would like to curate a backlog much larger than the folks on this call could deal with, so that we can say to potential SIG members: hey, come join us.
E
I think we can do that collaboratively, kind of, together, right? So I'm happy to set up this document, because I kind of started this topic and kind of announced it. So everyone can attach the closest project they are working on and give input on the case studies and requirements, right — this is our goal for now.
A
And use cases — or did I not catch it: did you say use cases? Because I think, for this SIG's sake, case studies are valuable but somewhat orthogonal, inasmuch as even if someone has something working which they're able and willing to talk about, this is largely disconnected from us coming to an agreement on what use cases to cover. Oh—