From YouTube: 2021-02-19 meeting
A: I was not able to join the Tuesday 9 a.m. data model meeting because I have a conflict, but I'm trying to reschedule my existing meetings, so hopefully next week I can join. I want to keep the topics here very short. The number one thing is: as we agreed, we're going to take some scenarios. I sent an OTEP and put it in the meeting notes, so please take a look and give your comments.
A: Probably it's fine; let's move ahead and build the SDK so we can have an end-to-end scenario, and once we prove that works we can try to add all the other APIs by replicating the success. I'm trying to define the milestones, and later in the OTEP I described the two scenarios that I explained in the previous meeting. One is simple: a developer just writes everything monolithically, using both the API and the SDK. The second one is where you have the separation: a developer working on a library just uses the API and has no idea about the end customer, while the end customer takes the SDK into their application, makes the runtime choices, and does the configuration. I tried to describe the scenarios, and based on the feedback I can already see there are many different voices. One group of people wants to make it more comprehensive; they want to cover scenarios that I originally didn't cover, which is good. Another group is saying we shouldn't cover this because we're going to boil the ocean.

My suggestion is that for the scenarios we still try to have a complete story, but when we do the design we stage it: the first stage is that we just work on the core API. We don't care about the SDK, just the instrumentation part, as I described in the OTEP. So please leave your comments, and I'll collect the feedback and add my observations.
A: It seems most of the folks would prefer to do this, so I'll update the OTEP and we can iterate on that. I hope we can sort this out within less than a week. During this time I have some top-level questions; I think putting them in a GitHub discussion would take forever, so I'll just quickly ask people here to get some initial ideas, and probably we can use the Tuesday OTel spec meeting to get more feedback.

I got asked by folks I reached out to, like gmeter: can you explain why we would introduce OpenTelemetry metrics instead of simply taking some existing metrics API? I was under the impression that we have a goal that we have to support OpenCensus, and this is why we have OpenTelemetry. So I just want to confirm that this is our understanding. It doesn't have to be the final decision, but I want to get consensus.
D: The path we'd like to give customers is one where they continue using the OpenCensus API and get a backend which is maintained, or where the OpenCensus maintainers can start migrating to a community that is growing. So the idea would be that you'd still use the OpenCensus API, and you would use the OpenTelemetry SDK as your exporter chain from OpenCensus.

So that's one of the use cases: I want OpenTelemetry to be underneath an existing API, because I don't think we can just go ask everyone to rewrite all their code from scratch to OpenTelemetry straight up. I don't think that's a viable adoption path, kind of ever. Then, after that, we would try to get people to go from the OpenCensus API to the OpenTelemetry API, and that's the harder problem of what this looks like.
D
Like
you
know,
is
it
does
it
have
to
be
a
hundred
percent
semantically
compatible,
or
does
it
have
to
be
like
spiritually,
compatible
to
the
point
where
you
can
do
everything
you
could
do
in
a
and
b,
but
it
doesn't
necessarily
look
the
same
and
the
reason
I
want
to
focus
on.
We
need
that
kind
of
sdk.
Shim
is
because
it
alleviates
the
need
to
have
it
look
exactly
the
same
right,
you're
it.
D
D
E: For what it's worth, that's what we did with OpenTracing. The idea is that the primary path people needed was to get a maintained implementation without having to rewrite their instrumentation. As long as they can do that, the cherry on top is: can you progressively migrate over? That would be nice. I don't know how difficult that would be for OpenCensus; it seems to me that with metrics that's a lot easier than with tracing, conceptually.
D: Conceptually, we found that with the current state of the world it's a little harder, just because OpenCensus has views, and OpenCensus doesn't have this notion of API/SDK separation in the same way that OpenTelemetry does. So it's actually harder to find a point to cut in; effectively, you can cut in as an exporter.
A: Right, yeah, and Josh has a PR here where he's trying to describe the challenge. I reviewed that PR and I've seen some very interesting comments there. So, Josh, if you could help us get there: I want to see who's willing to help get the clarity and move forward, or whether we think we can move ahead with the OpenTelemetry API and SDK without clarity on what the scope for OpenCensus support is.

Should we get blocked on this and figure it out first, and then who's going to work on that? I think George probably has a better idea. Or we can say this shouldn't be a blocker: let's get the OpenTelemetry APIs done, hyper-focus on that, forget about the OpenCensus compat story, and once we have that, come back and revisit it, probably after this year.
D: I'd like to bundle the OpenCensus and Prometheus problems together. When you talk about Prometheus, or say OpenMetrics, there's the same question.

If we support OpenTelemetry behind OpenMetrics, that's the level of support that I think we need from OpenCensus in v1: give a transition path for OpenCensus users to move to OpenTelemetry, because that's where we told them to go. And then, for those that are willing to adopt the whole API, great. There might be things that they can't do in our API that are actually okay from an SDK standpoint.

Right, there are things in OTLP that the API might not be able to do, and I think we're willing to take that as a limitation in this design discussion. So what I'm proposing is: there's this use case of the OpenTelemetry SDK behind an existing API in a language.
F: Like Josh just said, we're going to need the API to process our data model, and then potentially you can still use your OTLP exporter from your OpenTelemetry SDK, so that OpenCensus is really integrating halfway down the export pipeline. I'm okay with that, and it really means we're saying we're going to have a data model that can do more than most APIs need. Well, I don't want to block the conversation; Riley, it's back to you.
B: I have a question; it might be already discussed, or pretty stupid, but what does the first question mean in terms of "instead of taking a well-established metrics API like Prometheus"? What does that mean, if we go that path?
A
Probably
wait
on
that
and-
and
I
I
think,
because
these
are
the
sub
questions-
we'll
eventually
come
back
to
that
question,
because
we
have
to
figure
out
what's
the
goal,
are
we
like
do
we?
If
our
thing
is,
we
want
to
be
100
open
senses,
api
compatible
with?
On
top
of
that,
then
there's
no
way
we
can
take
premises,
api
right,
so
we're
trying
to
explore
this
question
and
in
hope
that
we
can
get
back
to
the
top
level
question
and
make
sense.
G: Question: when we say we want to be semantically compatible, does that also translate to being binary compatible in some way, such that plugins, processors, or exporters developed for one system are compatible with the other system? And what does it actually mean if we're just semantically compatible?
G: Right, so if I get this right, we're just trying to say that the wire protocol is semantically correct, but on a given host you're only going to be running either OpenTelemetry or OpenCensus, and they will all send via the OTLP protocol to collectors, whether from OpenCensus or OpenTelemetry.
D: I'd actually like to go a bit further, and if you look at how we specified tracing, it goes a little bit further. The idea is: I, as a user, want to use OpenTelemetry, but I'm using a library that's instrumented with OpenCensus.

So I configure one exporter pipeline in OpenTelemetry, and then I configure an OpenCensus exporter, or whatever they're called, but I think it's relatively the same concept, and that feeds into the same pipeline as OpenTelemetry. For tracing we actually have two implementations, if you want to look at them, and the way it's spec'd, the OpenTelemetry context is shared between OpenCensus and OpenTelemetry.
D
So
if
I
have
a
trace
in
open
census,
it
can
have
a
parent
that
was
created
and
defined
in
open,
telemetry,
okay,
so
the
context
objects
of
the
sdk
have
open
telemetry
take
over
that
means
resource
attachment.
That
means
labels.
That
means
baggage.
All
of
that
takes
over
from
open
telemetry
from
a
practical
standpoint
in
open,
like
open
sense,
is
driving
into
open
telemetry.
D
Adjustments,
as
far
as
I
know,
and
if
we
do
something
of
that
fashion,
that'll
be
a
more
difficult
story
right,
but
the
idea
here
is,
as
a
library,
author
right,
I
start
using
open
telemetry
to
define
my
export
and,
as
my
as
the
ecosystem
moves
off
of
open
census,
because
it's
deprecated,
I
see
no
change
like
nothing
changes
from
my
endpoint.
All
that
instrumentation
still
works
and
feeds
out
this
data
model.
D: That is the shim I'm proposing; I'm just specifying where the shim belongs. You could make a shim where, when you instantiate an OpenCensus API object, you instantiate an OpenTelemetry object on the other side. I don't think that's practical, just given where things are today; you actually physically can't do it today with the current APIs, even though we tried to keep them similar. So I don't think we want to limit ourselves with that.

Instead, we should define the hook from the SDK-ish thing of OpenCensus to OpenTelemetry's SDK, which is more of a data model.
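To make the shape of that hook concrete, here is a minimal sketch in Python. The class and field names are purely illustrative stand-ins, not the real OpenCensus or OpenTelemetry APIs: it shows an OpenCensus-style exporter forwarding finished span data into a shared OpenTelemetry-style export pipeline, with the trace context shared so a Census span can parent to an OTel span.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpanData:                      # what an OpenCensus-style exporter receives
    name: str
    trace_id: str
    parent_id: Optional[str] = None

class OTelPipeline:                  # stand-in for an OpenTelemetry SDK export pipeline
    def __init__(self) -> None:
        self.exported: List[SpanData] = []
    def export(self, spans: List[SpanData]) -> None:
        self.exported.extend(spans)

class CensusBridgeExporter:
    """Registered as an OpenCensus-style exporter; forwards everything into the
    OpenTelemetry pipeline, so both instrumentations share one export path."""
    def __init__(self, pipeline: OTelPipeline) -> None:
        self.pipeline = pipeline
    def emit(self, census_spans: List[SpanData]) -> None:
        self.pipeline.export(census_spans)

pipeline = OTelPipeline()
bridge = CensusBridgeExporter(pipeline)
# A span created by OpenCensus-instrumented code can have an OTel-created
# parent, because the trace context (trace_id) is shared across both worlds.
bridge.emit([SpanData("census-span", trace_id="abc", parent_id="otel-root")])
```

The point of cutting in at the exporter, rather than mirroring object instantiation, is that the Census side only needs to hand its finished data to the shared pipeline; the two APIs never have to look the same.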
A
And
josh,
I
believe
you
have
described
that
clearly
as
I
look
through
that
spec
issue,
like
the
pr
you
created.
So
if
folks
are
interested,
please
take
a
look,
but
I
think
the
takeaway
here
is:
we
don't
aim
for
api
compatibility
or
might
map
things
one
like
strictly
one-to-one
like
api
imaging.
So
so
we
have
a
good
clarity
here
that
should
unblock
the
api
and
sdk
work.
So
we
we
can
go
to
the
next
one.
I
feel
like.
F: You could imagine that there are several layers in the export pipe for metrics, and in the current, sort of legacy OTel model we have this notion of accumulator, processor, exporter. I'm trying to establish whether, when hooking OpenCensus together with OpenTelemetry, the accumulator is going to be used or not.

That was the piece of code where we do a lot of that kind of high-performance aggregation, which is sensitive, high effort, and one of the important pieces of a complete metrics SDK. If OpenCensus has to continue having its own stuff to do all that, it seems like a miss in some sense for OpenCensus. And maybe we could ask the same exact question about Prometheus clients.
F
Again,
we
don't
really
want
people
to
re-instrument
all
their
code
with
prometheus
clients,
but
but
could
you
take
a
prometheus
client
and
like
fork?
It
throw
away
all
the
code,
that's
there
and
and
turn
those
into
api
calls
that
then
you're
going
to
get
the
benefit
of
the
accumulator
and
you
should
get
sort
of
similar
performance.
F
And
if
you
look
at
how
we
did
the
open
tracing
bridge,
that's
more
like
what
it
looks
like
we.
We
are
literally
calling
straight
through
to
the
open,
telemetry
sdk.
Every
time
you
start
a
span
from
open
tracing,
and
so
I
don't
know
that's
just
another
way
we
could
do
it.
D: Yeah, I think the problem here is that some of the concepts in OpenCensus are done at the API level, whereas OpenTelemetry wants to do them in the SDK, and some of the concepts don't exist.
F
I
think
that's
the
big
problem
right.
We've
talked
about
this
in
in
theory.
What
you
could
do
is
sort
of
like
take
one
of
the
six
instruments
from
otel
combine
it
with
configuring,
a
specific
behavior
of
the
hotel
sdk,
and
then
you
can
map
your
open
census
instrument
into
a
configured
hotel
instrument
with
a
behavior
that
you
want,
etc.
But
but
since
there's
no
view
mechanism,
you
can't
do
that.
E: Yeah, I agree with that: something reasonable sooner, and then later, once we've built more advanced features, maybe that path can improve. But we don't want the perfect to be the enemy of good enough, and it seems critical for our velocity on this project that we not try to eat the whole elephant right at the beginning.
C: I have one question, though, about the OpenCensus migrations. Josh, maybe you can provide some more details on what those migration paths are, because I think that would also help us in deciding what the minimum compatibility needs to be.
D: Yeah, so I recommend reading the pull request if you haven't; it details how it works for trace. There are actually implementations of these shims for Java and Go, so you can take a look at the implementations if you want to see what they're doing. And as Josh McDonald pointed out, we are doing the accumulation in OpenCensus for metrics and then shimming it over from exporter to exporter.

That's just because it's the only way you can implement it today, which is why that portion of the spec, I think, will remain experimental, like not ready, for a while. But the tracing approach is how I'd like to try it: the spiritual essence of how we do tracing is what I'd like to do in metrics, and there are some really hard questions there.
B: Hey Josh, the Go and Java implementations of the OpenCensus shims: are they based off of this PR?
D
This,
it's
more
accurate
to
say
this
pr
is
based
off
of
them
and
has
some
lessons
from
them
that
there's
differences
between
the
two
implementations
subtly,
not
not
nothing
like
major,
but
this
pr
is
representing,
where
we're
going
to
take
them.
A: Okay, we're making progress on this one. The second one is: I remember in the big metrics meeting where we had the Prometheus folks, I heard there's a concern; it doesn't seem we're on the same page, so I want to call it out explicitly. It sounds to me like the goal of OpenTelemetry is that eventually the OpenTelemetry API and SDK for metrics will replace the Prometheus client.
E: So when you say "replace": there are two scenarios. One is that someone starting from scratch would like to use OpenTelemetry, so that they can support all these different backends and all the stuff that we provide, and one of those backends has to be Prometheus. We need to have first-class support, and I don't think anyone's debating that.

The other question is: what if someone has already instrumented their app with the Prometheus clients, they've already done a lot of work there, they're happy with that, and they're happy with sending their data to Prometheus? What do we do in that scenario?
A: I see, so there's no consensus that we're going to converge the Prometheus SDK and API with OpenTelemetry; it's not the goal. And we also don't think it's a high priority: folks using Prometheus clients for metrics are happy today, and when they add tracing they have the problem: "hey, I want metrics and traces to be somehow correlated, I want to enrich metrics, but I don't know how to do that." I can at least tell them there's some example they can use, like a hook-up thing.
F: It was mentioned that we could also pull the data right out of the Prometheus client library as we're pushing OTLP, which is just another option, another way to do the bridge. So there's push and pull, and none of the Prometheus clients are going to move towards push; I think we are the pressure that's making them think about it. So yes, Riley, please take this question to them.
E: Yeah, and just to follow up, I do want to emphasize that there is a potential here that perhaps the Prometheus people, just like other members of this project, may decide they would prefer to adopt the OpenTelemetry clients rather than continue to maintain their own clients in the long run. But that's kind of their choice to make, and to a certain degree, if they're interested, it means just making sure that the clients we make are something attractive enough for that group to be able to do that.
E: But that's the data protocol level. I think it's a given that we absolutely have to support their data protocols; the question is whether we want to make sure that the clients we're building are a thing a Prometheus user would want to use. Yes.

And then: is there something that makes it more difficult to actually make clients that the Prometheus project would want to adopt, for the same reason other people are adopting OpenTelemetry as their clients, which is that it's just a bunch of effort they don't have to do on their own?
C: I mean, those are the areas that we are working on today in the Prometheus working group, and those are above and beyond that: these issues of having configuration and instrumentation support are something that would need good support, not only in the collector but in the SDKs too. So yeah, that's the layer at which we need to make sure at least we're interoperable, or fully compatible.
A: My take is that I need to work with you and the Prometheus folks. The way I see it, it's probably not in Prometheus's best interest to ask every library owner to integrate with their client, because it only sends data to Prometheus; it's not the best place for them to have a plug-in model that can listen to arbitrary things. But it is in OpenTelemetry's interest. So the question is: if that's the case, they probably want to join forces. On the other side, I can see that Prometheus is well established.
F: There's something here about deltas that I feel is also an area where the Prometheus team, because it's a pull model, never has to worry about deltas. Maybe we're going to say no, we're never going to do deltas, but then we have to address the statsd users and the high-cardinality question. I was thinking back over the last two years: why did people like me push for deltas?

Aside from the statsd use case, which is pretty established, I realize it has a lot to do with me being a tracing vendor and the way I think about traces: every trace, sorry, every span, is a count of one, with as many cardinality labels as you want, and we are familiar at Lightstep with how we work with spans.
There's an analogy with every latency measurement, or value-recorder measurement, or histogram measurement for a metrics system: I know that it's possible to benefit from and implement high cardinality and to use it with care, and I can't get that from a Prometheus system, because they'll tell me no, or my client has to have infinite memory. So there's this chicken-and-egg problem.

If all we want to do is support Prometheus, then we have to not do deltas, and we have to say your dimensions must be fixed for all time, because you can't change dimensions. And it becomes hard to see how we begin to benefit from the OpenCensus vision, which is that you can take your keys out of context and slap them on a metric or whatever, which is very much like what you do with spans and traces.
H: As somebody who works for another vendor, I feel very strongly the same way, and plus one for continuing delta support.
D: Yeah, so a couple of quick questions. One: with tracing, there are vendors or pre-existing formats where we said OpenTelemetry will support these, you know, Zipkin, Jaeger, OpenTracing, and there are different levels of support for the three: one obviously became OpenTelemetry, and for the other two it's "we're going to support these first class."

I think this is agreement, but Prometheus and statsd are the two existing formats, and then there's OpenCensus, which is that special group, because we have more direct control over it as it merged into OpenTelemetry. So is that fair to say? Should we just call that out as a thing? Because a lot of the time we ask "should we use an existing metric API?", and one of the reasons to say no is: well, we can handle all of these formats.

That's what we do. So, at a minimum, the requirement would be OpenCensus, Prometheus, and statsd, or whatever extensions are on them too.
E: Yep, and just to jump in, I think we absolutely want to support this vision that all the Joshes are promoting here, maybe even something that literally the Prometheus backend can accept today. But as long as we're solving this in the collector, as long as we have a data processing layer where we're able to give them the thing that they want, then it's not like our instrumentation is just broken because we didn't figure out a way to do that.

It seems like we have all the pieces to do that. I'm personally not worried that, if we build a more advanced system with more features, we won't be able to turn that into something any existing backend could make use of today.

So I don't think we need to worry about sticking to what they're currently providing at the API level.
F: But Chad, I think there's a pledge that anything we're offering as a new and fancy feature in our data model has to be offered in a way that we can just take it out, and that's the challenge I've been working on for the data model question. I put a link to my slide; maybe, Riley, you could just click into it quickly. It's a spreadsheet, and I want to work on it a little bit more.
E: But Josh, my question is: can we take that out in the collector? Can it just be that, yes, they don't support delta, so we just have a Prometheus exporter that's modeling the world according to Prometheus?
F: The way we had done it, up until about last summer, Bogdan and I were kind of taking the lead there, and we had it working inside of an SDK. It's very easy there to take out the labels you can take out; it certainly turns out to be an easier problem to do it in the SDK, and that's what we learned. So now the question is: can we do all these things in the collector? Can we take deltas to cumulatives?

Can we strip out labels in a meaningful way? I think the answer is yes, but we have a pretty hard challenge to specify it, and there are cases where it doesn't work, which you also have to specify, like if we want it to be horizontally scaled.
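The delta-to-cumulative part of that question can be sketched as a tiny collector-style processor. This is an illustrative sketch only, not the real collector component: it sums incoming delta points per series key so a pull-based backend like Prometheus sees cumulative values. Real processors also have to track start timestamps, detect process restarts, and handle the horizontal-scaling case where different deltas for one series arrive at different collector instances.

```python
from collections import defaultdict

class DeltaToCumulative:
    """Sketch of a collector-style processor that converts delta temporality
    to cumulative by keeping a running sum per series key."""
    def __init__(self) -> None:
        self.totals = defaultdict(float)

    def process(self, series_key, delta_value):
        # Add the delta onto the running total for this series and
        # return the cumulative value to emit downstream.
        self.totals[series_key] += delta_value
        return self.totals[series_key]

p = DeltaToCumulative()
key = ("http.requests", (("method", "GET"),))
assert p.process(key, 3) == 3
assert p.process(key, 2) == 5   # cumulative view of the two deltas
```

Note that this only works if all deltas for a given series reach the same processor instance, which is exactly why the horizontally-scaled case needs separate specification.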
F: I think so too; there are still some operational questions about it, so it's not clear, and it's not going to be an open-and-shut case. In the case of label removal, how do you tell it what to do? Because if two different Prometheuses get different label sets, things are already wrong.
F: I intend to write more about the Prometheus remote write translation. There's the OTLP model and there's the PRW data model, and there's got to be some transformation from one to the other. As long as that's correct, then I think we've satisfied it: we can do deltas and we can do sort of label-dimension freedom. We have to specify it, though; my backend team gets nervous and they say, oh, that sounds really complicated.
A: Yeah, so we probably have to work offline on this item, and I have to do time control, so let's move on to topics three and four. I guess this should be simple: my understanding is that the OpenTelemetry metrics API will have additional things that cannot be fulfilled by Prometheus, and this is why we're trying to do this new API for metrics.
D: Yes, I think it's important; I'm planning to write a blog about it. Also, I want to throw out that there's even a question of context itself. You mentioned that correlation between metrics and traces is important; baggage you also called out. In OpenTelemetry there's this notion of pillars underneath the signals, and that is context, baggage, and resource, and I think all of those pillars are important.
F: Yeah, and I think this is something that has been missing from both the statsd and Prometheus models, and there's some connection with service discovery here, which is that you often don't know everything about yourself. OTel has given us the idea of a resource, but it seems like we don't always know our own resource, so there's this resource discovery question, and then it becomes: can you join your resources downstream, which is exactly what Prometheus does well.
C: So, given that we are working on that for the collector, maybe we can reuse the discovery and then, of course, the resource manager, or resource health checker if you will.
F: I feel that there's some question about whether there's a mandatory resource and whether it must be preserved in metrics, and I confess that I have more questions than answers here. There's something about the data model, about the use of timestamps, which is meant to help you detect overlap. So I have a start timestamp, and then I report these cumulative values, and I can see that one cumulative is part of a series because they all have the same start timestamp.

But I could also do that using some sort of unique resource identifier, to say this is a unique process that started and another one is overlapping with it, and maybe I don't need timestamps. So there's a question of whether you must preserve resource information, because it helps you know whether there's a distinct process, but oftentimes just knowing the start time is enough to disambiguate metrics series. So I have this question: why do we need to have a resource?
I know, but it seems like a useless line of reasoning, except that if you were to remove every resource-identifying label from a metrics series, all of a sudden you have this overlap, and we don't actually specify what to do with overlap. That's a data model question that I feel has to be answered very quickly.

I've used a phrase, and I don't remember where I wrote this now, it's in that 1078 issue about up metrics, and this could be a private conversation, but: the idea of a single writer, the idea that for every time series we output, we expect that there's only one entity writing that time series. In Prometheus, that's done by ensuring that job and instance are always unique.
F
So
there's
always
only
one
target
producing
a
job,
an
instance
and
as
long
as
you
preserve
those
labels,
you
get
the
right
behavior,
but
it
would
be
invalid
to
remove
a
job
in
an
instance
label
from
a
prometheus
time
series,
because
now
you
have
a
jumble
of
cumulatives
that
are
all
mashed
together,
I.e,
overlapping
and,
of
course,
remember.
Promises
doesn't
keep
that
start
time,
so
you
need
those
resource,
identifiers
and
prometheus
model,
because
you
don't
get
start
time
in
that
prometheus
model
so
because
we
are
required
to
translate
into
prometheus
model.
F
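The single-writer violation described above can be demonstrated in a few lines. This is an illustrative sketch (the label-handling helpers are hypothetical, not any real library): two cumulative counters from two distinct scrape targets collapse onto one series key once the `job` and `instance` labels are stripped, leaving overlapping values that, without start timestamps, cannot be told apart.

```python
from collections import defaultdict

def strip_labels(series, drop):
    """Remove the given label keys from each (labels, value) sample."""
    out = []
    for labels, value in series:
        kept = tuple((k, v) for k, v in labels if k not in drop)
        out.append((kept, value))
    return out

# Two cumulative counters written by two distinct scrape targets.
samples = [
    ((("job", "api"), ("instance", "a:9090"), ("path", "/")), 10.0),
    ((("job", "api"), ("instance", "b:9090"), ("path", "/")), 7.0),
]

stripped = strip_labels(samples, drop={"job", "instance"})
merged = defaultdict(list)
for labels, value in stripped:
    merged[labels].append(value)

# Both cumulatives now land on one series key: two writers, overlapping
# values, and (without start timestamps) no way to disambiguate them.
assert list(merged.values()) == [[10.0, 7.0]]
```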
D: We should talk about that, because I do think it's really important to get resource right. I think resource is fuzzy on purpose, and in practice it's an exercise in getting as close to accurate as possible, not perfection, so it's a really hard problem. We should talk about it, but I don't think we need to talk about it here, in the sense that we agree there needs to be some kind of identifier, and we'll call that resource for now.
C: Riley, can we actually create an issue for this to track the resource discussion? I can create it under the Prometheus working group, if that's helpful, so the Prometheus details can be identified clearly.
A: Yeah, okay, so we have 12 minutes left. One thing in the PR, the OTEP, is that I'm putting in a proposed timeline. I know it's bad behavior, but I just want to get feedback, so let me quickly explain. I learned from other folks saying: hey, you've got to move that a month earlier, because in December people go on vacation, so there's no progress during that period, and I think that makes sense.

I'm trying to say: can we target that by end of May we have the spec at a point where we start to tell people, hey, you can trust that it's relatively stable, and you can start implementing it in the SDK. We're not going to make major changes, but we'll keep adding features, and by end of September we're telling people it's feature freeze: we're only going to fix bugs.

If you have an additional big thing, we're not going to add it. But if the implementation is making progress, we learn from that, we can fix minor issues, and by December we lock it. I know it's a little aggressive, but it's just an initial timeline, so we can communicate, use it to backtrack our work into something we can measure progress against, use it to do triage, and build consensus.
A: What I'm going to do is cover that in the metrics blog post document, because when we explain something, I think it has to explain where we are, what we are going to do, why we are doing it, what the expected result is, and what the timeline is; that gives a complete story. I also need to use this to update the issue template. And I know, Ted, you've been amazing with the Friday triage meeting; I think the trace part is done, but metrics is still far away, so there's no strong reason there.

I'm personally against that, but I think having those clarifications is important. So, starting from this Friday, I'm going to join the triage meeting, and I'll need to update the issue template, so that when people go and file a metrics SDK or API related issue, we communicate to them where we are and point them to the document with the progress. That way they know, and they're not expecting "hey, this metrics API is already locked."
A
I
want
to
add
a
new
feature
or
I
want
to
add
new
semantic
and
then
we're
going
to
tell
them
no
we're
doing
something
differently.
That
makes
sense.
So
do
people
think
the
timeline
is
crazy
or
you
think
we
we
should
put
it
as
the
the
current
goal
I'll
put
november
since
november
is
literally
the
same
as
december.
I
think.
A
Yeah,
we
might
be
running
into
a
similar
situation
where,
by
end
of
like
last
year
november,
we're
tracing
is
almost
ready,
but
we
only
have
some
minor,
because
this
is
the
first
time
we
have
to
figure
out.
But
I
I
guess
this
time
because
we
have
already
stepped
through
the
tracing
part.
We
might
be
able
to
do
a
better
job.
E: So this is going to go through a process, and there's a certain point where the maintainers cut in, and we're going to expect maintainers to probably have to put down whatever other work they're doing and focus on implementing metrics. I think that's the September date, is that correct? The reason that date is important to know is that we need to clean up our tracing experience.

Yeah, absolutely; it's also helpful to know that we can onboard people at that point, and using that as a target is great. But also, when we're thinking about this next leg of tracing work, we want to time-box it to fit into this initial gap. In other words, we want to avoid a situation where we hit this metrics stage and we're kind of in the middle of a bunch of other stuff. But September seems far enough out.
A: So please take a look at this OTEP. I'm going to update the dates based on what we discussed, so please comment and follow up if you haven't. Optionally, if you care about OpenCensus, help Josh out, because I try to avoid Josh being the only one speaking for OpenCensus: when we started OpenTelemetry we were saying OpenCensus support is one goal, and now it seems most of the folks are working on other stuff.

So help Josh out, and also I'm going to follow up on this: I'll send the PR and I'll ping you guys, so don't worry, and if you have time, please help out. Thank you.