From YouTube: 2021-01-27 meeting
B
No, no, it's breakfast time for all; yeah, good to eat. Well, while we have all these calls, let's wait for a moment for folks to join. Hi, Bartholomew, how are you? And David and Pune, thank you for some of the comments on the lambda blog. I really appreciate it; good comments.
C
Thanks for working on it; it's exciting to see that get polished and published.
B
Yeah, I mean, I was super interested in roping in some of our engineers to also participate more and get more deeply involved, so that, as we think through these areas together, we really get to some better engineering and solutions.
B
I think Jana said she'd be right back. Let's wait for a few more minutes, then we can get started.
B
Yeah, I just shared it. I'd added some topics that were at the top of the discussion in the metrics workshop that we had done on Jan 15th, so I just added some topics that were at the top of the list for us. But again, Josh, you guys should add some of those areas too. This is obviously a lot of areas that we are all interested in, but we'd like to figure out if we can prioritize, dig in, establish requirements, and so on.
F
Did you see that I turned phase one into a task sheet? Let me actually share the link here as well. I roughly itemized the top-level things that need to be done. The prioritization is my prioritization: P0 means it must be done, P1 is good to have, sort of. And one thing that I was about to say is...
F
Maybe, if we agree on this list, we can go and take a look at it, at least to check that we are still okay with phase one as it is. This list should contain all the things that we need to do, and then we can prioritize and pick owners. Some design docs need to be written for some cases.
B
All these things we have to do anyway, all of the above; it's just more about discussing together and figuring out what makes sense to target first, itemizing the clear requirements as the baseline that gets built. Of course, you can add enhancements later.
B
You guys also have the notes, right, from the previous meeting?
F
Should we go through the spreadsheet first? Or I can present, if you want. If you want to have a larger discussion about these phases, that's also fine. I'm not sure if we're all agreeing on the scope, or on what actually needs to be done; if you're agreeing with that, I can present phase one. Any comments on that?
F
Because there's a larger group here, and there are a couple of people who might be interested in talking about the overall roadmap for the data model and that sort of issue, please feel free to chime in on where you want to take this discussion, because the task list that I have is really tactical, very clear, and concrete; there is not much to discuss there, if you ask my opinion.
F
So it's more about delegating the work, pretty much.
H
So, on the task list, I just want to make sure there's consensus on all the different pieces, and to understand how they're meant to be used. The only one that jumped out at me is up metrics. Was there a discussion I missed around how we want to handle up metrics in OpenTelemetry? Because I would love to know.
F
It's going to be part of phase two, where we haven't done any work. There have been some preliminary discussions, but nothing's finalized.
F
So I mean it will be incompatible behavior, if that's what you're asking, Richard.
H
Sure, that's fine. So maybe, I guess, the question is: is there consensus on all these phase-zero things to do?
F
It doesn't have any data-model improvements and so on, because those require more discussion, and we're going to do that in parallel with this work. This is more about having OpenTelemetry as a drop-in replacement for the Prometheus server, so people don't have to deploy two agents, meaning an agent and a Prometheus server. There are already certain components in the project: the Prometheus receiver, which does the discovery and scrape, and there's already a remote-write exporter, and there are certain issues with both of them.
F
So the list here is just itemizing the things that we need to fix. The first one is the receiver itself: it was taken from OpenCensus, so it isn't a very well maintained thing, and I'm suggesting we rewrite it. What it does is scrape things using the Prometheus scraping and discovery libraries, but it turns everything into this OpenCensus intermediate format and then converts everything back to OpenTelemetry. So we can rewrite it to just transform everything into OpenTelemetry directly and clean up the code a bit.
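The direct conversion described above can be sketched roughly like this: one scraped Prometheus sample becomes one OTel-style data point, with no intermediate format in between. All names here are illustrative, not the real collector API.

```python
# Hypothetical sketch of the proposed rewrite: scraped Prometheus samples go
# straight to OTel-style points, skipping the OpenCensus intermediate format.

def scraped_sample_to_otel_point(name, labels, value, timestamp_unix_nano):
    """Turn one scraped Prometheus sample into an OTel-like point dict."""
    return {
        "metric_name": name,
        # Prometheus labels map one-for-one onto OTel attributes.
        "attributes": dict(labels),
        "value": value,
        "time_unix_nano": timestamp_unix_nano,
    }

point = scraped_sample_to_otel_point(
    "http_requests_total",
    {"method": "GET", "code": "200"},
    1027.0,
    1611772800_000_000_000,
)
```

The point of the rewrite, as described, is that this single transformation replaces the scrape-to-OpenCensus-to-OTel two-step.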
F
It's
very
unmaintainable.
There
is
like
this.
We
don't
have
any
stale
markers.
We
need
to
do
this
in
order
to
have
compatibility.
We
have
this
other
item.
F
It would be nice if we figured out a way to auto-shard, because on large clusters the discovery, the Prometheus library that we use right now, doesn't have automatic sharding capabilities. So this is something that we want to do, at least for us. The other thing is we need to label... yeah, go ahead if you have a question.
F
Thanks. Thank you. The target needs to be added as a label in the receiver. We already do this; I just put that down as a requirement. This is the only thing that we need to do in terms of labeling, because in phase one we assume that picking the right labels and setting them up correctly is the user's responsibility. We will only attach the target label.
F
Currently,
the
data
model
doesn't
have
any
up,
so
there's
nothing.
We
can
do
when
we
receive
it
in
terms
of
transformation
it
into
an
open,
telemetry
thing.
So
all
we
are
gonna
do
is
just
like
a
log
something-
and
that's
already
like
that's
missing
right
now.
There
is
no
like
wall
capabilities.
At
this
point,
we
can
consider
adding
them.
This
is
this.
F
For example, a StatefulSet has been more useful to them. And then the exporter: that was sort of what I had for the receiver. The exporter has a couple of issues that need to be debugged.
F
I think a maximum-concurrent-outgoing-requests type of thing: it would be nice if we added some config to allow users to configure these things, because it depends on the size of their collection, what they're exporting, and so on, so we need some flexibility there. Retry mechanisms are handled by OpenTelemetry's retrying; I'm not sure if that's sufficient, so we need to take a look at that.
F
If
we
can
make
you
know,
if
we
can
satisfy
the
retry
and
behavior
the
parameter
server
is
doing,
it
would
be
good,
but
you
know
it
might
not
be
so
important.
So
this
was
my
p1.
Also
like
prometheus
server,
has
some
fine
tuning
best
practices
and
so
on.
I'm
not
sure
how
how
much
of
it
is
going
to
apply
here,
but
we
need
to
take
a
look
and
evaluate
basically
what's
going
on.
F
Just improve some of those things; that's pretty much what it is for the exporter, functionality-wise. We just need to evaluate and fix some of these; there are probably some race conditions coming from this bug, so we need to fix and improve it. Basically, that's what the work is. For the Kubernetes-support-related stuff, I put down that we need to decide what the deployment model is and whether the Prometheus components are going to impact it.
F
So
it
would
be
nice
to
see
a
design
dock
and
you
know
we
need
to
publish
some
reference.
Vmware
files
and
some
parameters
configuration
examples,
and
we
also
said
that,
like
you
know,
we
don't,
we
are
not
in
the
charge
of
adding
any
host
related
like
labels,
so
we
need
to
document
what
the
user
needs
to
do
like
if
they
wanna.
You
know,
we
need
to
make
sure
that
they're
labeling
things
correctly.
F
We
you
know,
let's
also
document,
you
know
how
they
should
label
things
with
the
kubernetes
name,
space,
pod
name
and
so
on
and
so
on.
So
this
was
my
list
for
phase
one.
I
just
wanted
to
keep
it
very
tactical
to
you
know,
get
the
basics
running.
B
Yeah, I mean, this is really good. I just wanted to open it up: there are several folks here who can review and add any areas that are missing, if folks are seeing anything, I think.
N
I do have one thing, for the number 12 line. I think there is another discussion in that document, which is how you scale collection. The deployment model is very tied to how the scaling of the collection is going to be possible. I know Prometheus already has something; I don't know if that is good enough or not, but some investigation there is required.
O
Sorry I was late; I was on the phone because I was driving to school and back.
O
I
have
been
listening
the
whole
time
and
there,
so
we
just
got
out
of
the
technical
committee
meeting
last
hour
and
I
wanted
to
say
that
there's
there's
definitely
going
to
be
increased
focus
on
getting
our
data,
our
otlp
protocol
for
metrics,
frozen
or
stabilized,
and
so
I
think
I'd
like
to
separate
the
issues
in
this
document
here
that
are
like
purely
about
data
model
and
semantics
and
interpretation
from
the
ones
that
are
about
operations
and
scalability
and
fault,
tolerance
and
reliability
of
prometheus
in
the
open,
telemetry
collector.
F
That makes so much sense. Do you want to itemize things here, like in phase two? And, yeah, Janna can.
O
In that document, I wanted to say that I looked over the list, and I saw a bunch of items that, to me, are mostly semantic-convention questions, but there are some that come down to the data, the actual protocol. In particular, I've been thinking about this "up" question, and I want to give you all an idea that has come to me over the last few days.
O
I
was
trying
to
figure
out
how
to
make
an
up
metric
work
in
the
push
model,
and
so
and
and
josh
cerith
in
a
private
conversation
last
week,
challenged
me
to
think
about
what
the
user
actually
wants,
which
is
to
know
whether
their
targets
are
up.
Of
course,
and
so
we've
talked
about
how
in
prometheus
up
is
just
a
gauge
that
says
I
am
up
and
because
you're
pulling
that
data,
you
put
the
timestamp
the
im
up.
Timestamp
happens
at
the
moment
that
you
scrape
in
our
otlp
model.
O
In our OTLP model, we have this gauge point, and for all the cumulative and delta points we've given two timestamps: one is the start time and one is the current time, and so we say that the delta or the cumulative covers that time range. What I think we need to do is give gauges two timestamps as well, to fix this problem. The start timestamp of the gauge is the moment when it happened; that's the moment when it became the current value.
O
So what it means is that, as soon as you finish pushing a report, you immediately set a gauge right afterwards saying "I am now up, I'm still up", and then, when you report that gauge a minute or ten seconds later, you're saying "I have been up this whole minute", or this whole ten seconds. So that's an important data-model change. That's what I think we need to focus on.
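The two-timestamp gauge idea above can be sketched as a small data shape: the start timestamp marks when the value became current, the end timestamp marks when it was reported, so a single point can claim "I have been up for this whole interval". Field names here are hypothetical, not the actual OTLP schema.

```python
# Sketch of the proposed two-timestamp "up" gauge in a push model.

def up_point(became_up_ns, reported_ns):
    """A gauge point covering the interval [became_up_ns, reported_ns]."""
    return {
        "value": 1,  # 1 = up, mirroring Prometheus's `up` gauge
        "start_time_unix_nano": became_up_ns,
        "time_unix_nano": reported_ns,
    }

def covered_interval_seconds(point):
    """How long the point claims the target has been up."""
    return (point["time_unix_nano"] - point["start_time_unix_nano"]) / 1e9

# Gauge set right after a push, then reported 60 seconds later:
p = up_point(1_000_000_000_000, 1_060_000_000_000)
```

With only one timestamp (the pull-model scrape time), the "been up this whole interval" claim would not be expressible, which is the gap being discussed.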
N
Josh,
I
think
I
think,
let's
not
fix
issues
right
now.
Let's
identify
the
issues
at
this
moment
yeah.
I
think
it's
it's
it's
it's
very
good
for
us
to
to
document
this.
So
maybe
the
phase
two
or
even
phase
one
or
whatever
yeah.
F
Josh,
I
shared
it,
so
you
can
edit
it
now.
H
That
might
be
a
better
task
than
just
debugging.
When
we
see
it
right,
yeah.
P
I think it's just one of those trade-offs you make with push: that information isn't there, and you kind of just have to use heuristics, basically, to figure it out.
N
But
also
also
brian
and
josh,
from
google,
there
is
an
uptime
metric
by
the
way
in
in
census,
internally
in
google,
which
was
using
push
model
which
may
have
not
been
as
useful
as
the
up
in
prometheus,
but
was
good
enough
for
people
to,
for
example,
look
at
the
graph
and
say
how
many
targets
are
reporting
data
right
now
you
can
just
look
at
the
the
but
the
the
way
how
it
was
implemented
was
you
you
do
a
plus
one,
so
it
was
actually
a
cumulative,
not
the
gauge
that
was
doing
plus
one
every
time
when
you
do
a
push
and
you
look
at
the
rate
and
then
the
rate
has
to
match
with
the
number
of
targets.
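The push-side uptime counter just described can be sketched numerically: each target adds one to a cumulative counter on every push, and the rate of that counter, scaled by the push interval, estimates how many targets are currently reporting. This is an illustration of the idea, not any real Census API.

```python
# Sketch of the cumulative "+1 per push" uptime metric and the rate check.

def estimated_reporting_targets(first_reading, last_reading, window_s, push_interval_s):
    """Estimate reporting targets from two cumulative counter readings.

    delta / window gives pushes per second; multiplying by the push
    interval recovers the number of targets pushing.
    """
    delta = last_reading - first_reading
    rate_per_s = delta / window_s
    return rate_per_s * push_interval_s

# 3 targets, each pushing every 10 s, observed over a 60 s window:
# the counter grows by 3 * (60 / 10) = 18.
targets = estimated_reporting_targets(100, 118, window_s=60, push_interval_s=10)
```

As noted in the discussion, this gives "how many targets are reporting" rather than per-target up/down state, which is why it is only almost equivalent to Prometheus's `up`.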
N
We can call it uptime or whatever; we can come up with a different name for something that does almost the same thing, not exactly this, and semantically it may give you almost the same information. We need to understand that better. The point that I'm making here is that there may be a possibility, even for push, to come up with another metric that can satisfy some of these things.
O
This
in
the
data
model,
for
example,
up
down
some
observer,
is
dispect
out
as
a
defaulting
to
zero.
So
if
we
were
to
use
up
down
some
observer
type,
in
other
words,
if
we
created
this
up
inspected
out,
as
sum
then,
the
default
is
zero
and
when
you
don't
have
a
record,
that's
your
scale
marker,
it's
zero.
We
can
solve
this.
I
don't
think
we
should
try
and
solve
it
in
this
meeting.
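The defaulting-to-zero idea above amounts to a read-side convention: if `up` is a sum type whose default is zero, then the absence of a record doubles as the stale marker. A minimal sketch, with a hypothetical helper rather than any actual SDK API:

```python
# Sketch: absence of a record for a target is read as 0 ("down"),
# which is exactly the stale-marker behavior being proposed.

def read_up(points_by_target, target):
    """Return the `up` value for a target, defaulting to 0 when absent."""
    return points_by_target.get(target, 0)

points = {"10.0.0.1:9090": 1}

up_a = read_up(points, "10.0.0.1:9090")  # reported: up
up_b = read_up(points, "10.0.0.2:9090")  # no record: treated as down
```

This only works because the type's spec pins the default to zero; for a plain gauge, a missing point carries no such agreed meaning.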
H
Yeah,
I
think
that
might
even
deserve
its
own
working
group
to
work
on
up
metrics.
D
Yeah
and
on
a
higher
level
note,
of
course,
we
are
talking
about
prometheus
compatibility.
One
of
the
things
we
should
really
try
very
hard
is
to
try
and
avoid
anything
which
is
already
well
defined
within
prometheus
and
just
reuse,
the
same
name
in
in
the
other
place
because,
like
we
are
discussing
about
up
and
all
of
and
all
the
time,
everyone
is
like
what
definition
of.
Of
course,
there
is
already
a
well-defined
meaning
of
what
up
is
in
the
prometheus
family.
D
Okay, yes: if you're able to make it exactly the same, then yes, by all means call it the same, because then it is the same from the point of view of any system ingesting this data. Yes, absolutely, one hundred percent. But up to now we've been discussing things which are not exactly the same, and which might even be impossible to make absolutely the same, and still calling it the same will absolutely lead to user confusion and violates the principle of least surprise. That's the point I was trying to make.
C
So
just
looping
back
to
to
something
that
j
macd
said
earlier.
Have
we
identified
have
do
we
agree
on
all
of
the
items
in
this
list
that
have
a
potential
impact
on
the
data
model?
I
think
josh
mcdonald
named
a
few
are:
is
it
worth
kind
of
adding
a
column
in
this
spreadsheet?
To
say
these
are
the
ones
the
title
data
model,
yeah.
I
There's a section that's missing as a line item here: the whole thing about configuration-settings support, like job name, scrape intervals, scrape timeout, TLS, and so on. I just don't see that on this list; I'm not sure if that's on purpose or not.
F
I
I
didn't
edit
because
we
are
using
the
you
know
the
scraping
and
discovery
libraries,
so
it
already
provides
you
know
compatible.
Let
me
add
an
item
ensure
configuration.
F
Josh, I'd like you to itemize maybe the most important stuff right now, because everything is being finalized, and some items are related to those.
N
Yana, I would add another line here. One of my colleagues pointed out that there is already a very good discoverability mechanism in Prometheus, for good or for bad.
F
We
we
actually
the
reason
that
the
discovery
is
so
complex
in
the
receiver
is
it
tries
to,
like
you
know,
add
some
labels
related
to
target.
If
that's
what's
been
mentioned,
we
will
keep
doing
that
because
we
need
to
add
like
target
related.
You
know
labels.
F
If that's where Bogdan's co-worker's feedback is coming from, I'll add an item.
N
I
I
I
do
need
to
mention
there
is
one
possible
breaking
change
coming
in
for
performance
reasons
particularly
related
to
kubernetes,
but
it
sounds
like
you're,
primarily
just
going
to
be
reusing
all
the
prometheus
scrape
code,
in
which
case
this
will
all
just
work.
N
But
we
need
discoverability
for
other
components
that
we
have
so
so
not
only
for
me
for
prometheus
targets,
but
we
want
to
use
some
discoverability
for,
for
example,
to
discover
redis
instances
and
then
from
there
to
script
in
the
redis
format.
That
is
not
prometheus
and
we
have
a
specif
specific
code.
That
does
that.
So
we
we
need
these
discoverability
components
to
discover
not
only
prometheus
targets
but
other
targets
as
well.
As
I
mentioned,
like
an
example
from
redis.
N
Happy to do that, but what I'm trying to ask is: is there a plan to make that a standalone library? Then there would be a clear separation between the discoverability components and Prometheus, and you wouldn't need to use Prometheus to have all the discoverability stuff. I know it's code and we can import it, and that's it. But what do you want out of a standalone library that you don't have at the moment?
N
Well, even if it stays under the github.com/prometheus org, it would be a standalone project, right?
N
For me it's the following: if it's a library, an independent Go module, it has its own versioning, its own thing. I can consume that Go module, and you follow the Go module standard, which says you cannot make a breaking change unless you bump the version, and so on. So you do it as a standalone Go module.
N
Then
it's
not
going
to
be
easy
for
you
to
break
the
protocol
or
you'll
kind
of
define
very
well
the
interface
between
the
consumer
and
the
producer
of
these
rules.
I
feel
like
once
when,
when
is,
is
still
in
the
same
repo.
It's
very
easy
for
you
to
break
this
contract
and
to
to
to
make
changes
that,
for
others
will
be
breaking
changes
without
observing
them.
N
The
other
thing
may
be
the
other
thing
it
may
be
in
the
future.
You
can
think
of
right
now
you
have
everything
embedded
into
one
module
with
which
brings
all
the
dependencies,
but
what
about
we?
You
have
an
interface
that
is
implemented
by
kubernetes
by
docker
by
by
couple
of
of
environments
that
you
discover
and
then
I
can
import
separate
them
based
on,
for
example,
if
I
don't
want
to
have
all
the
kubernetes
dependencies
and
yana,
I
think
you
are
showing
us
your
chime.
N
If
I
want
to
import
only
the
kubernetes
and,
let's
say
docker,
but
I
don't
care
about
other
other
discoverability
that
comes
with
a
lot
of
dependencies,
can
I
do
that
again,
not
necessarily
critical,
but
it
will
be
a
nice
to
have
for
us
if
we
standardize
on
these.
A
lot
of
that
is
also
also
possible.
N
Okay, I will take a look. I last looked at this six months ago, probably more than six months ago, so I need to double-check what the current status is there. So, yeah.
Q
We
very
much
want
to
make
the
existing
code
usable
by
other
projects
and-
and
I
in
fact
use
it
in
prom
tail.
We
use
it
to
discover
a
lot
of
alert
managers,
it's
it's.
It
is
quite
a
good
generic
service
discovery,
library
and
it
has
very
minimal
requirements
in
terms
of
imports
and
especially
with
the
planned
go
changes
that
won't
just
import
the
rest
of
the
go
universe
as
well.
I
think
it
probably
is
going
to
be
quite
usable
as
it
stands.
I
hope.
I
Yeah,
the
other
thing
is
that
code
hasn't
changed,
that
api
hasn't
changed
in
years.
I'm
not
sure
since
it
was
created
like
five
six
years
ago
and
like
I
think
we
had
once
so,
making
it
up
stayed
all
into
modules
was
breaking
changes
and
obviously
the
I
think
was
only
twice.
We've
changed
things
and,
as
I
said,
there's
one
change
that
looks
like
in
his
future
and
it's
actually
just
changed
the
data
model
because
it
turns
out
that's
got
some
efficiency
issues.
N
And
how
willing
are
you
to
keep
the
discovery?
I
mean
if
it's
still
in
the
whole
prometheus
project?
How
willing
are
you
to
keep
keep
it
up
to
date
with
dependencies?
So
one
of
the
problem
that
we
started
to
observe
is
because
of
the
project
is
very
large.
We
we,
we
have
a
problem
with
dependencies,
so
I
think
the
easiest
way
to
solve
this
is
if
we
try
to
keep
everything
up
to
date
and
how
how
how
is
that
story
on
your
side?
Is
it
something
that
you
actively
do?
I
So, in general, dependencies are updated every six weeks as part of the release process. On rare occasions we have been unable to upgrade because there was some breakage, and we had to stick with an older version for one dependency; that happens once a year or something. But in general you can expect that we'll update to the latest Go modules once every six weeks. Okay.
N
Perfect
and
the
last
question
is
that
okay,
no,
there
is
no
more
question
thanks,
that's
that's
it.
I
will.
I
will
give
a
try
to
the
new
interface
and
see
if
there
is
anything
missing
there.
D
And
ideally
when
you
or
if
you
can
find
something
which
you
don't
like
or
don't
understand
or
want
something
different
or
so
could
you
maybe
make
a
half
page
or
so
where
we
can
just
walk
through
it
and
then
discuss
it
within
team
because
we
don't
have
quorum
here
or
anything.
So
we
can
promise.
Yes,
we
are
willing
to
talk
about
it,
but
we
can't
make
decisions
here.
So
ideally,
a
half
pager
would
be
nice.
Yeah.
B
I
mean
I
think
we
can
have
a
github
issue
for
tracking
each
one
of
these.
F
Can
I
can
I
ask
another
item
related
to
the
scraping
libraries,
the
appender
interface
is,
you
know
everybody
is
trying
to
implement
it
and
everybody
is
struggling
with
it.
Is
there
a
way
for
you
to
like
publish
a
reference
like
for
histogram,
for
example?
How
do
I
you
know
reconstruct
a
histogram
from
their
pandora
apis
like
we
have
this
issue
on
open
telemetry
on
cloud
watch.
You
know
consuming
that
api
is
a
bit
difficult
for
people
who
have
no
context
about
prometheus,
so
it
would
be
so
nice
if
there
were
some
references.
Q
This
is
something
we
can
take
and
and
definitely
like,
produce
either
examples
or
something
on.
I
think
it's
a
good
point.
F
Yeah
yeah
yeah
and
I
mean
there's
so
many
cases
like
I
don't
know
like
if
a
reference
is
actually
like
produceable
because
of
the
cases
that
you
want
to
handle
and
so
on.
It's
just
like
I
mean
for
someone
who
is
coming
in
prometheus
with
absolutely
no
background.
I
found
it
very
difficult
to
use
this
interface.
Maybe
we
can
get
your
help
case
by
case
rather
than
asking
you
to
you
know
document
things
that
will
be
on
also
an
option
if
you
can
help
us
reviewing
some
of
that
stuff.
That
would
be
super
useful.
I
Because this is not something Prometheus does, since it doesn't need to. If you want to know how the code works, I'll be happy to walk you through it. But I suspect what you really are looking for is an OpenMetrics parser in Go, which does not exist yet, at least not a full one; you can half do it, the way I did.
F
Anyway, that was a side note. Even if you can just help us with reviewing: sometimes there are ambiguous things that we don't understand, because the API is not that expressive. If you can just review a couple of things once in a while, that would also be super helpful.
S
Have we gone through all the items on the list? Janna, did you want to share?
F
I
going
back
to
the
you
know
the
timeline
issue.
What
what
are
some
of
the
like?
You
know,
big
items
that
we
need
to
address
where's.
My
oh
man.
O
The
question
that's
come
up:
the
most
is
that
we
are
trying
to
to
finish
the
trace,
spec
and
release
things,
and
people
are
going
to
really
want
that
collector,
and
so
the
number
of
questions
about,
where
is
the
metrics
support
in
the
collector
today,
will
be
jumping
and
we
better
be
ready,
for
that
is
probably
the
most
important
thing.
F
Yeah, so I personally also want to figure out the owners and the priorities. Everybody was confused by the earlier priority column, but I can just mention...
F
...which stuff needs to be done earlier, and so on. How do you want to continue from this point on? Part of this list will require design docs, so some people will need to go and work on the problem. I wonder if there are any volunteers to tackle any of these items here.
C
Just as a straw man: if we could ask people to try to get something on here, you could open it up, either to comment or to edit more broadly, and people could try to get something on here in the next day or so.
H
Can I throw out an idea? Sorry, I had to turn my camera off; kid things I'm doing. So, if I understand correctly, we agree that this is a good set of tasks. Why don't we open these up in GitHub as a group of tasks to do, and then let people take ownership of them there? Right, yeah.
F
That
makes
more
sense.
Actually
I
that's
why
I
created
this
tracking
issue
column
because
we
wanna,
you
know
ideally
wanna
open
a.
E
Looking
at
this
on
a
higher
level,
we've
got
the
prometheus
receiver,
scraping
the
data,
and
then
we've
got
the
prometheus
remote
right
exporter.
F
Yes,
I
mean
it's
a
drop
in
replacement
initially,
because
people
are
don't
want
to
like
deploy
prometheus
server.
Just
export.
You
know
remote
right,
so
if
the
open
telemeter
can
help
that
case
initially,
that
would
be
really
nice,
but
the
overall
goal
is
prometus
compatibility
which
you
know
requires
us
to
have
like
data
model
compatibility,
so
you
can
scrape
prometheus
and
export
to
any
exporter.
That's
the
overall
like
long-term.
F
Goal
so
yeah
right
now
it
looks
like
it's
just.
It's
we're
basically
re-implementing
prometheus
server,
but
that's
like
initially
people
are
blocked
on
this.
So,
like
that's,
that's
why
we're
doing
it?
And
it's
very
tactical.
O
There's
this
question
about
prometheus
recording
rules
that
I
think
is
probably
hanging
over
us,
which
is
something
that,
where
there's
a
question
of
whether
open
telemetry
collector
will
begin
to
to
have
that
type
of
functionality
for
the
other
signals
or
the
other
magnetrix
ingesters
and
the
other
exporters
as
well.
Q
The
recording
rules
evaluate
entire
queries,
so
you
know
in
a
particular
query
language
josh.
What
what
are
you?
What
are
you
envisioning
there.
O
Well,
so
yeah
something
a
little
different.
Actually
bogdan
came
up
with
this
design
for
otlp
over
the
summer,
which
I
have
come
to
appreciate
very
much,
and
the
idea
is
that
every
data
point
has
describes
sort
of
self-describing
its
own
aggregation.
That
was
done.
We
are
able
to
perform
re-aggregation
of
these
data
points.
Semantically
at
least
we
think
so.
Potential
for
recording
rules
equivalent
in
the
otlp
model
is
that
you
could
have
an
aggregation.
O
That's
removing
labels
and
outputting
a
new
series
somewhere
in
your
hierarchy
so,
and
I
think
that
that's
roughly
the
the
spirit
of
what's
being
done
with
a
prometheus
recording
role,
is
that
we
can
take
these
aggregates
and
and
compute
multiple
aggregates
and
output
them
as
as
on
the
right
path
as
new
series.
That's
what
I
mean
and
I
think
that's
what
people
are
looking
for
when
they
start
to
use
recording
rules.
O
It
is
a
whole
query,
but
is
a
whole
query
over
one
prometheus
node,
not
over
the
whole
data
set,
and
I
think
that's,
roughly
speaking
what
I'm
describing
when
I
say
we
can
re-aggregate
these
data
points
in
otlp.
Q
Yeah, I wouldn't be convinced that that's even an eighty-percent subset. I definitely encourage you to check out things like the SLO error-budget-based alerting that tends to rely heavily on recording rules; I don't believe that would fit into the model you're suggesting.
Q
Do you mean the Grafana Cloud Agent? Yeah, we don't touch recording rules in the cloud agent; we do them server-side. Okay, that's why my interest was piqued when you said recording rules in the agent, because it's not technically feasible to do that in such a general way. But I think some of the straight time aggregations you're talking about are something else.
E
There's one tool that the community developed where you can essentially put in a couple of factors and generate some Prometheus rules for SLOs, with multi-window, multi-burn-rate alerting, etc. So that's an example that you could check out.
O
We
thank
you.
Q
That's
really
interesting.
Definitely,
you've
mentioned
the
target
label
a
bunch
here,
and
is
it
worth
at
this
point
talking
a
little
bit
more
about
kind
of
the
impact
of
having
this
this
target
metadata
on
metrics.
In
terms
of
you
know,
this
is
where
I
struggle
with
the
separation
of
the
discussion
about
changes
to
the
model
and
the
discussion
about
changes
to
the
deployment,
because
they're
so
heavily
interrelated.
With
this
target
label
target
metadata.
F
Yeah,
true
yeah:
do
you
yeah.
Q
Well,
I
guess:
does
everybody
know
what
I'm
what
I'm
referring
to
here
or
should
I
give
a
bit
more.
B
I
think
tom,
you
should
go
into
a
bit
more
detail.
Q
I
mean,
and-
and
please
do
correct
me
if
I'm
wrong-
I'm
definitely
not
an
expert
on
the
the
open,
telemetry
metrics
format,
but
my
understanding
is:
there's
there's
some
special
metadata
attached
to
every
metric
that
says
effectively
what
what
host
it's
come
from
is
that
is
that
correct.
F
There
are
some
like
resource
attributes,
but
they
are.
Are
there
like,
mandatory
or
not
like
josh
or
bogdan.
N
Depends
on
the
the
the
semantic
conventions
that
we
have
for
for
resource
attributes.
We
require
only
the
I,
mostly
only
the
identifier
to
be
mandatory.
The
other
things
are
optional,
but
we
do
have
semantic
conventions
and
definitions
for
all.
These
hosts
odd
container,
you
name
it
probably
100
or
something.
N
I don't know exactly what a Prometheus target has, but I would encourage us, if we scrape from Prometheus and we get, for example, the IP, the pod ID, or whatever information we get from Prometheus, from discoverability or from the target itself, to put all of them into the resource. Indeed, we have this service.name, which is the job, and the service.instance.id, which is the instance, I think, in the target. So we do have these two in our resources as well, and they are mandatory for us.
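The mapping just described can be sketched as a tiny transformation: the Prometheus `job` and `instance` target labels become the OTel resource attributes `service.name` and `service.instance.id`, and any other discovered labels ride along into the resource. The helper is hypothetical, not the real receiver code.

```python
# Sketch of mapping Prometheus target labels onto OTel resource attributes.

def target_to_resource(target_labels):
    """Map a Prometheus target's labels to an OTel-style resource dict."""
    resource = {
        "service.name": target_labels["job"],
        "service.instance.id": target_labels["instance"],
    }
    # Carry any other discovered labels into the resource as-is.
    for k, v in target_labels.items():
        if k not in ("job", "instance"):
            resource[k] = v
    return resource

res = target_to_resource(
    {"job": "api", "instance": "10.0.0.1:9090", "pod": "api-0"}
)
```

Per the semantic conventions mentioned above, the two mapped attributes are the mandatory identifiers; the rest are optional.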
E
So
just
to
make
sure
that
we're
talking
more
that
we're
defining
things
in
the
same
way
for
prometheus
a
target
is
the
unique,
unique
combination
of
labels
that
that
a
target
has,
and
then
there
are
a
couple
of
special
labels.
Let's
say
one
of
which
is
the
instance
label
and
the
job
label
and
the
job
label
is
a
rough
grouping.
Let's
say,
and
the
instance
label
literally
identifies
like
should
actually
be
unique.
N
Okay,
yeah,
we
we
do
have
service
name,
which
is
equivalent
of
job,
if
I
understood
correctly,
because
it's
a
combination
of
multiple
instances,
but
we
also
have
a
third
level,
which
is
a
service
name
space
which
may
you
may
have
cassandra,
deploy
in
serve
in
a
space
full
and
in
a
space
bar.
So
we
have
three
of
them,
but
we
also
have
a
knowledge
of
empty
namespace,
which
is
the
global
which
will
be
equivalent
with
what
you
have.
P
We
attach
those
that
resource
attributes
like
based
on
like
pod
name
pod
labels
like
we
put
that
inside
the
resource
so
like
it
has
some
connection,
like
we've.
Probably
just
discussed
this
with
the
observer
thing
as
well.
There's
discovery
stuff,
because
I
think
it
has
kind
of
it's
kind
of
related,
because
you
use
the
discovered
information
to
to
kind
of
enrich
the
metrics
that
get
emitted.
I
It's
a
kind
of
a
choose:
your
own
adventure
to
some
extent,
and
so
normally
you're
going
to
end
up
with
well.
The
instance
table
is
either
going
to
be
an
ip
port
or
the
pod
name.
You
have
your
job
labels,
your
service,
some
companies
decide
to
chuck
in
the
kubernetes
namespace
as
well.
It
might
be
other
things
and
but
in
general
you
kind
of
want
a
pretty
minimal
set
and
other
resource
attributes
would
end
up
as
infometrics.
Basically.
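The "minimal identifying set plus info metrics" idea just described can be sketched like this. `SplitResource` is a hypothetical helper (not a real OpenTelemetry API): only a few identifying attributes stay on every series, and everything else would become labels of a single target_info-style info metric:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>

using Attrs = std::map<std::string, std::string>;

// Hypothetical sketch of the split described above: identifying attributes
// are kept on every emitted series; the rest go onto one info metric.
std::pair<Attrs, Attrs> SplitResource(const Attrs &resource) {
  static const std::set<std::string> identifying = {
      "service.name", "service.instance.id", "service.namespace"};
  Attrs on_series, on_info_metric;
  for (const auto &kv : resource)
    (identifying.count(kv.first) ? on_series : on_info_metric).insert(kv);
  return {on_series, on_info_metric};
}
```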
Q
In particular, I understood that the current collector had to be deployed as a daemon set because it used the API to get this metadata, but I'm not sure if I understand that correctly.
N
N
I think that was probably a bit of a mistake, because we never thought very well about how we're going to interact with scraping Prometheus endpoints and so on. So I think that's a separate thing; we need to discuss this and maybe revisit that design.
B
F
I'll share this spreadsheet with a couple of folks, so, you know, keep adding items we can prioritize. Maybe we could have another one.
F
This week, if it's required.
B
Yeah, I mean, we can take this into the metrics discussion, the metrics SIG, if needed, but our next meeting is coming up next week. So.
B
S
B
G
G
Hey, Lalit, is this the right link?
G
G
G
G
For some reason I had some issue with the Zoom link. Did we change that? Because in my old invite, I guess, there was an old link that was asking for a password.
V
G
V
Let me just take a couple of minutes to share this.
I
W
It works; no, it works, I mean.
V
Okay, I think probably we can just add our agenda items and then we can discuss.
U
Hey, I'm just looking at what's going on here.
V
Yes, I think I can start. While we're updating the agenda, I can just start with the change: yesterday I finally updated the compliance matrix for C++. It was something which had not been updated for long, I mean for the past six months.
V
W
V
Yeah, and I think it didn't even get merged yet; I think there are some other reviewers.
G
Do we go over each row and discuss it?
V
For this, I just went through the code and saw what all the things are that we are compliant with as of now, and I sent that around just for confirming. I think he had a quick look at it and gave me some suggestions, and then I raised the PR after that. But it would be good to really go through it, and if there's something we are missing here, I think it would be good to raise a PR.
G
I have a question about GetActiveSpan and SetActiveSpan. I'm going to send a PR which actually implements this for the ETW exporter, and I was searching for examples of how other exporters are doing that. So my question is: when we say that we do that, are we compliant?
G
Right, the thing is, let me elaborate on this. Let's say I get a tracer and start a span, and I write some exporter that is handling this. Is the expectation that the caller manually does this, or is the expectation that the SDK automatically manages this, like the nesting?
G
Which span is current? Because, technically, when I obtain a tracer and I start a span, I kind of need to expect that this span is now active.
U
The active span currently is not managed by the SDK but managed by the API. So there's this context part of the API, and with this, basically, we have the nested context structure, and those context objects can hold active span entries.
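The behavior just described, the active span living in an API-managed nested context structure rather than in the SDK, can be sketched roughly like this. `Span`, `Context`, and `Scope` are simplified stand-ins for the real OpenTelemetry C++ types, and a real implementation would keep the current context in thread-local storage:

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct Span { int id; };

// Immutable, nested context: each context points at its parent and may
// carry an active-span entry, as described above.
struct Context {
  std::shared_ptr<const Span> active_span;
  std::shared_ptr<const Context> parent;
};

// One global slot keeps the sketch small; real code would use thread_local.
static std::shared_ptr<const Context> g_current = std::make_shared<Context>();

// RAII scope: pushes a new context holding the span, restores the previous
// context on destruction -- what a with-active-span helper does for you.
class Scope {
 public:
  explicit Scope(std::shared_ptr<const Span> span) : previous_(g_current) {
    auto ctx = std::make_shared<Context>();
    ctx->active_span = std::move(span);
    ctx->parent = previous_;
    g_current = ctx;
  }
  ~Scope() { g_current = previous_; }  // pop: restore the outer context

 private:
  std::shared_ptr<const Context> previous_;
};

const Span *GetActiveSpan() { return g_current->active_span.get(); }
```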
G
So the thing is, does it imply that the user actually has to call it, like, with the current-span API? Yes, currently.
G
I see, I see, okay, yeah. This is where I had the confusion, because I was going to send in another iteration of the ETW exporter that automatically sets the active span whenever you actually start the span.
U
The fact is that the user does this, and also maybe instrumentation libraries kind of do.
G
Got it, got it. Okay, sure, I'll open the PR; maybe we can discuss it there, because I see some scenarios where it'd be convenient to actually start the span and assume that, when I started it, this is the actual current active span.
W
Yes, the spec matrix shows whether get and set active span are required for tracers and tracer providers, and, in other words, I think it is not required by the matrix.
V
Okay, so just moving ahead: please go through this compliance matrix for C++. In case there is something which we already support and it's not there, we need to raise a PR, and the same if there's something we don't really support but mention as supported. So.
G
U
I can explain that: is_remote is a flag on the span context, and this span context basically contains span ID, trace ID, and trace flags. If, for example, we create the first span in a service and we get the context via HTTP headers from another process, W3C style, and we create a span context from these W3C trace headers that come from another service, then this context that we create will have is_remote equal to true.
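A rough sketch of the flow just described: a span context built from an incoming W3C `traceparent` header is marked remote. `SpanContext` and `FromTraceparent` are hypothetical stand-ins for the real types; the field widths follow the W3C Trace Context `traceparent` layout (`version-traceid-spanid-flags`):

```cpp
#include <cassert>
#include <string>

struct SpanContext {
  std::string trace_id;
  std::string span_id;
  std::string trace_flags;
  bool is_remote = false;
};

// Hypothetical parser for a W3C traceparent header, e.g.
// "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01".
// Layout: 2 (version) + 1 + 32 (trace id) + 1 + 16 (span id) + 1 + 2 = 55.
bool FromTraceparent(const std::string &header, SpanContext *out) {
  if (header.size() != 55 || header[2] != '-' || header[35] != '-' ||
      header[52] != '-')
    return false;
  out->trace_id = header.substr(3, 32);
  out->span_id = header.substr(36, 16);
  out->trace_flags = header.substr(53, 2);
  out->is_remote = true;  // it came over the wire from another process
  return true;
}
```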
U
Do we have a sample of that? I am not sure. I think this is used in the W3C tests, but I think we don't really have a multi-service example. We only have single services; we don't have any services to link. That could be a nice task, actually, to provide an example.
G
Exactly; when I'm going to push some changes for ETW, I noticed that I'm not really using that flag, and I was thinking, what is the use case, how do I showcase it? Okay, sure, that looks good, yeah. I saw that on the surface; I was just not sure if it's supposed to be handled by the tracer provider or where, or what's the end-to-end use case for that.
V
U
U
G
U
W
C
W
C
V
U
V
That was something, as I said, just go through this. Apart from that, continuing with the compliance matrix, there is another PR which is raised.
B
V
V
For exporters, binary protocols and the JSON protocol are optional, and for Zipkin, anyway, we are not supporting version one, so I think we are good here; we didn't plan to implement that. But for Jaeger, I think, Tom, you may want to look into that, because this is made optional. This is also.
V
And probably this is something where they want to have that. So, I think, Thrift over UDP, not sure why, but probably Thrift over UDP and HTTP is something which is optional, in favor of gRPC; so something different from what we have been thinking till now.
W
V
Yeah, but it's good to have; we already have this. I mean, it's optional, but if we already support it, why not? Let's, yeah.
V
U
While we're talking about the spec matrix: maybe, for all the features that we don't have yet, we can just create issues on our side. Yes.
V
V
U
V
V
Okay, yeah, that's all; let me open where I kept it now. Yeah, I think.
G
G
B
G
Is it only for the logging API, or how do we expect to handle AddEvent on a span?
G
So you're handling it like in the Jaeger exporter?
V
X
V
U
G
It's an interesting topic. I looked at how some of the Microsoft exporters work, like EventFlow, for example. They do have custom metadata attributes that allow identifying an individual event as routable or not to the backend, and by default they all flow as traces, but you can actually highlight what exactly you want to route as a log or as a custom event. So there's yet another dimension of configurability, next to the OpenTelemetry SDK, somewhere in the intermediary exporter, which may also handle similar aspects.
G
That's why I was kind of looking at this. I guess when you have the SDK and an exporter in the SDK, that exporter inside the SDK has to handle that part. Otherwise, it may also be handled by intermediate entities, such as an out-of-process sending agent or something. Anyways, yep, thanks for the clarification on this.
V
V
I don't really understand what exactly is needed; let me know what docs for the Bazel release usage are needed. Talking about documenting the Bazel build and deployment: is that what he's talking about, for the release?
G
The command-line build with CMake, and also, I think, we already had some users for the IDE build, like a Visual Studio based build. It would be nice to have some write-up just for people who are not very familiar with that, to quickly onboard and try it out.
G
Maybe I would suggest that, if we come up with any CMake-based instructions, I'd rather focus right now on the latest Visual Studio IDE, because an older one may have its own quirks and differences. We can tell how to do it on the latest and maybe help out, but not necessarily how to do it with, say, Visual Studio 2017, even though we know that it works. I'd rather focus on just the latest for now.
G
Cmx
right
now
for
the
c
maker
now
not
for
the
bezel.
Yes,
a
c
make
yes,
because
I
just
mentioned
this
for
bazel,
and
I
think
we
need
to
have
something
like
this
for
the
c
maker
as
well,
to
make
command
line
and
cmake
support
and
visual
studio
id,
because
many
users
who
just
need
to
get
things
done,
they
would
use.
G
W
G
V
G
Which repo is a good question. It's an end-to-end example of how to get started: instrumenting a simple app, installation, whatever dependencies, where this is flowing, and maybe references, links to things like the collector docs.
G
The reason why I want to mention this as the main part is because I have another example in mind already, end to end with ETW, but folks are going to say that it's Windows-centric, and I don't want to promote that as the default example. You see what I'm saying; we should focus on a vendor- and platform-neutral example first.
K
W
K
V
V
W
Okay, and yeah, I'm wondering: I think I have already discussed this with Lalit a little bit, and I think we need gRPC integration for OpenTelemetry to make it useful in real scenarios. HTTP and gRPC are the most common ways to build a server with our OpenTelemetry C++, and I also just looked at them.
W
This integration for other languages, like Python and Go, is the same: there is also a separate repo, like grpc/grpc-ecosystem, something like that, and these integrations are part of it, like a plug-in for gRPC. So they are extracted to some separate repo, and I'm wondering, do we need to do the same thing for our C++? Or, another idea is to make it a component in our contrib, like provide a gRPC plugin there.
V
G
I have a question on this, guys: wasn't it mentioned as optional in the spec matrix already? And then here's my logical reasoning: if we mention something as optional in the spec matrix, that means we acknowledge its existence and we optionally encourage the implementers of the SDK to have it, because otherwise it wouldn't have been mentioned. It's like: my foobar exporter belongs to contrib, but gRPC is something more common, and logistically.
G
V
G
And I would still advocate, you know, when we bring up this example of what belongs to contrib and what not, I would like to illustrate.
G
Hopefully, once we get the ETW exporter merged, how it is useful in a vendor-agnostic way, with Elastic as well. I see that, for example, for EventFlow, you can have a Google Cloud exporter, which means that anything that benefits most of the people out there in a vendor-agnostic way and is prominent belongs to main; anything that is a my-foobar-company exporter should probably go to contrib, and I'd see contrib as an incubating place.
G
If we see something that is incubating and becomes relevant, then we can advocate and maybe vouch for bringing it to main, and maybe even recommend it to be added to the spec.
V
Yeah,
I
totally
agree
with
you.
The
only
only
concern
I
see
is
that
it
should
not
reach
to
the
point
where
the
main
repo
exporter
package-
it's
quite
a
big
size,
somebody
who
just
want
to
use
one
jeepkin
or
something
he's
kind
of
forced
to
really
download
the
complete
big
jungkook
package.
So
that
that's
the
only
concern
I
see
here,
but
otherwise,
I
think
definitely.
G
I
think
the
issue
is
probably
not
our
code
but
more
of
what
dependencies
we
take,
because
it's
like
in
prometheus
example
in
prometheus
exporter,
we
take
a
a
dependency
on
prometheus
cpp
client,
which
recursively
takes
dependency
on
another
500
meg
of
source
code.
So
it's
like.
Probably
we
have
to
be
careful
in
in
those
kind
of
cases,
but
something
that
is
prominent
and
self-sufficient,
and
I
think
for
grpc.
We
already
have
this
dependency
anyways,
I
mean
we
need
this
elsewhere
for
a
tlp.
G
So
pure
addition
of
our
code
to
repo
should
be
rather
small.
V
But
just
to
talk
about
grpc
integration
I
mean
I
just
did
I
mean
once
I
had
discussion
with
tom,
I
mean
I
just
did
some
research,
I
mean.
The
problem
here
I
see
is
that
there
is
no,
so
grpc
has
to
provide
some
support
to
really
provide
some
call
back
mechanism
or
injectors
where
we
can
get
their
request
response
objects,
and
we
can
use
that
and
you
we
can
create
the
traces
on
top
of
that
which
this
is
something
which
they
provide
for
most
of
the
languages.
But
it's
not
yet.
Therefore,
c
plus.
V
It's
their
experimental,
but
I'm
not
sure
how
good
it
is.
As
of
now
it's
something
they
say
that
in
sector
class,
grpc,
experimental
interceptor
class,
which
I
haven't
really
seen
any
in
any
of
the
documentation
they
have,
they
do
mention
in
their
documentation
about
python.net,
golang
and
other
languages,
but
somehow
it
is
missing
as
of
now,
probably
just
because
it's
experimental
they
are
already
mentioning
it
anywhere.
V
V
Channel
filters
where,
through
which
they
are
able
to
able
to
inject
spams,
but
that
that's
something
probably
in
the
grpc
specifically
for
open
senses.
It's
not
for
open
tracing,
and
I
saw
ryan
in
one
of
the
discussion
ryan
really
fighting
about
why
it
is
something
therefore
sentence
and
why
it
is
not
generic
enough
so
that
it
can
be
used
for
open
tracing
but
yeah.
This
is
something
probably
we
have
to
see
more
into
that,
but
that
could
be
a
bottleneck
here
to
really
have
some
kind
of
instrumentation
available.
V
But
I
mean
talking
about
the
instrumentation.
I
really
struggle
hard
to
really
see
the
use
cases
for
c
plus
plus
for
really
instrumenting.
If
we
can
put
the
way
other
languages
are
providing,
whether
we
do
have
some
scenarios
where
we
can
provide
the
instrumentation
like
grpc
could
be
one,
and
I
struggle
to
see
other
in
scenarios
where
so
so.
Those
those
frameworks
should
provide
some
extensions
for
us
to
inject
our
api
yeah,
which
is
something
missing.
As
of
now.
U
Oh
yeah,
so
a
lot
of
you
you,
you
say
you
use
you
missing,
use
cases
for
for
c,
plus,
plus
c
open
the
diameter
plus
use
cases
I
mean
I
I
would
say
engine
x
would
be
probably
a
very
big.
V
Use
I
already,
I
think
we
discussed
me
and
tom
were
discussing
offline
on
something,
and
I
think
nginx
was
something
a
use
case
and
I
thought
of
really
doing
something
on
top
of
that,
because
even
in
not
just
anything
apache
web
server
also
has
it
provides
writing
the
dynamic
loadable
modules
just
like
nginx.
So
we
can
write
those
modules
and
we
can
inject
it
dynamically.
So
I
think
these
two
would
be
one
of
the
use
cases.
G
G
I
realized
actually
it
deserves
more
formal
definition,
because
the
quick
examples
is
great.
Anybody
can
map
from
c
to
c,
plus
plus
in
terms
of
the
real
functional
or
projection
layer.
G
G
We can start with contrib and see how we can shape something in the contrib repo, and if it looks reasonable to most of us, we can come up with the idea of either starting a new SIG or assuming the ownership of C/C++ in this SIG.
V
Yeah, that makes sense; let's try out something. I'll still prefer to have a different SIG, so that we are more focused on having a C++-based implementation in this one, and probably have a separate one for C. Instead of creating a wrapper on top of C++ for C, let's have something using the C runtime directly.
G
Sometimes
you
see,
even
if
we
have
a
wrapper,
you
can
actually
provide
the
alternate
implementation
and
prc
it's
the
api
that
is
relevant
because
to
quickly
experiment,
you
can
come
up
with
the
projection,
try
and
say:
yes,
this
is
usable
and
nice,
and
now
this
is
the
pure
re-implementation
of
that
same
api,
sdk
and
c.
G
And
I
agree
that
engines
is
a
very
nginx
is
very
prominent
example,
because
I
think
we
all
already
saw
two
questions
about
it
in
getter,
so
people
are
naturally
interested
about
it.
U
Yeah, and when you can say NGINX is instrumented via OpenTelemetry C++, that would also be a big kind of marketing; that would be very good marketing.
G
Yes, the other question was about MySQL Proxy; MySQL Proxy itself, I think, is also in C, not in C++. So at least two other scenarios where people asked about C.
G
Plus the entire kernel drivers space. For now, I guess, it's not really that prominent in distributed tracing itself, but there you would normally use C, or a C++ path with pretty much no STL, no standard library classes; a separate niche which we're not covering right now.
W
G
Like, even if you have C++, you may not have the full-blown standard library, right?
G
W
G
Right, right, yes, and for Linux you typically deal with C, mostly, rather than C++.
G
Yeah, I think, Max, the last one: I really wanted to check who's going to be willing to be maintainers or approvers in the contrib repo. By default, I put the same logic as done for opentelemetry-dotnet, where the same main repo approvers and maintainers are listed as the ones for contrib. Riley had a question on this.
G
So I guess we just need to make sure, for now, maybe, that we agree to go with the same people, and if anybody is coming up and saying they want to join, I guess we should allow them, as long as they have good intentions.
V
G
So again, my selfish interest at this point: while the ETW exporter is written in C++, the test infra and listeners and flows that showcase it are mostly written in C#, because it's easier to process that in C#. So I'd like to contribute it as an example in the contrib repo that pairs with the C++ code in the OpenTelemetry C++ SDK, so that I can say: hey, this is the SDK, this is how you onboard, this is the agent or forwarder that you run.
G
This is how you forward from the application into your monitor, Google Cloud, Elastic, pretty much step by step, and saying, hey, this thing is vendor-neutral, vendor-agnostic, vendor-friendly; with whatever cloud, you use it this way or that way, and feel free to enjoy it.
G
V
Is it, if I understood, non-C++ code, C# code, right? Yes, yeah, so, but.
G
It is needed; okay, I can script it to the point where you don't actually have to clone the code. You see what I'm saying; I'm not going to add it as a submodule. You need to do the write-up and explain: you launch this, this will deploy your listener. There might be about two source code files, maybe, in C#, under the examples.
V
Yeah, everything should be okay in that case. Probably my only suggestion, not related to this but in general, would be for the contrib repo: let's maintain a similar kind of structure to what we have for the main repo. Like, we have exporters, we have propagators, we may have instrumentations; going ahead, not related to this one totally.
V
G
Anyway, I agree; that's why I thought that we should start with the structure. If you guys think that something is missing in that structure... since I couldn't add an empty directory, I did a README in each directory to make sure that this layout is created. You can take a look at the original branch to see the layout; if you think that something is missing, we can add it, sure. I think probably I can just have one.
G
I was intentionally trying not to enforce any strict rules yet; for example, code formatting rules or a CI.
G
I think this is something that we need to incrementally add, and, in a way, I think we should also have the ability to exclude something from the code formatting rules. Because, let's say, Google or Microsoft or Amazon contributes something as an example, and they may use their own coding style for their example; that's where I think we should have a bit more flexibility for the contributor.
G
Yeah, this is pending this current discussion, like who's going to be the owners, right? Yeah, I'll add that file for sure.
V
V
This is from one of the developers; I think the same person was there but probably is not in the call anymore. If you can see, it looks okay to me as of now. We definitely have some issues in terms of validation of the trace ID which we are getting from headers, but if we can just look into that, and if it looks okay, it should be fine; most of the code is inherited from the W3C trace context.
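The trace ID validation mentioned here has well-defined rules in the W3C Trace Context spec: exactly 32 lowercase hex digits, and not the all-zero invalid value. A sketch of such a check (`IsValidTraceId` is hypothetical, not the repo's actual code) might look like:

```cpp
#include <cassert>
#include <string>

// Hypothetical validator for a trace id taken from incoming headers,
// following the W3C Trace Context rules: 32 lowercase hex digits,
// and not all zeros (the spec's invalid value).
bool IsValidTraceId(const std::string &id) {
  if (id.size() != 32) return false;
  bool all_zero = true;
  for (char c : id) {
    const bool is_hex = (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f');
    if (!is_hex) return false;  // rejects uppercase and non-hex characters
    if (c != '0') all_zero = false;
  }
  return !all_zero;
}
```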
V
G
Okay, look, I have a quick question on that, the string change, but yeah, I'll add it to the PR notes.
V
G
My current question on this one, I didn't put it in the notes, was whether this method is standard, because I have that other layer of building everything with the standard library, which I still need to add to CI. I just don't want to regress by adding any non-standard methods to our standard classes.
V
Okay, oh, you're talking about adding the find method in string_view, yeah.
G
V
V
W
I think the changes are very small, and I think they are also necessary for our CMake package to be installable, because these are validated; I'm working with an internal team member, and we validated all the changes which are required for consuming the CMake package from our repo.
G
Yeah, I guess you can merge it. My only concern was, and maybe you answered that, that we use this right now for, yes, okay, for the HTTP server as well, for zPages.
V
W
V
G
So probably, yeah, maybe we can make it generic at the top level and then pretty much assume that it's available for all projects under it, because we already have at least three that use this.
G
That's my only feedback right now; otherwise, as is, it looks okay to me as well.
V
Okay, thanks; I think we are good. I think we get six minutes back.