From YouTube: 2020-12-15 meeting
A
B
B
A
A
A
Hey ted, I moved my item about the metrics topics timebox to the end.
A
Because it looks like the versioning stability pr might be a good one to talk about. Is it okay if I go over the status stuff first, or did you want to lead off with that?
C
Nope, great, go ahead man.
A
Okay, morning folks. We have standing agenda items we'll usually start off with; I'm switching things up just a little bit: I'll go over the status and then go into triage. We don't have much of triage, got a couple of important things on the agenda, especially the versioning stability pr and the associated oteps, and then I've left a standing topic item placeholder for metrics topics.
A
If we need to discuss any. So, status of p1 issues on the ga spec burndown project, that's tracking the issues in the spec repo, proto repo, oteps repo: we've got 20 p1s in the to do. Two of them have associated prs and we've resolved 55 of them. This is movement from last week, where from the triage session, not a heck of a lot of activity of new stuff coming in, but to do net went up by one; in progress, we moved three.
A
A
E
Here, yeah, this feels like required for ga, because we either change the name now or we don't in the future. That's my feeling.
A
And because it's p1, we need an assignee.
F
A
E
C
Right, I actually haven't read through this issue, so I'm not quite sure what he's asking for.
C
Is he just saying that there's a different idiom in different languages, right? So rather than having a shutdown function, you just make something closeable in java. Is that the gist here? Not that some languages don't need a shutdown.
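A minimal sketch, in java, of the closeable idiom being asked about here: shutdown exposed through AutoCloseable so that try-with-resources owns the lifecycle. The class and method names below are hypothetical illustrations, not the actual opentelemetry-java API.

```java
// Hypothetical names: shutdown is expressed as AutoCloseable.close() instead of a
// bespoke shutdown() method, so the application owner manages it with try-with-resources.
final class ExampleTracerProvider implements AutoCloseable {
    private volatile boolean closed = false;

    String getTracer(String instrumentationName) {
        if (closed) {
            throw new IllegalStateException("provider already closed");
        }
        return "tracer:" + instrumentationName; // stand-in for a real Tracer
    }

    @Override
    public void close() {
        // a real implementation would flush span processors / exporters here
        closed = true;
    }
}

class CloseableIdiomExample {
    public static void main(String[] args) {
        // the application owner, not instrumentation code, controls the lifecycle
        try (ExampleTracerProvider provider = new ExampleTracerProvider()) {
            System.out.println(provider.getTracer("mongodb-instrumentation"));
        } // close() runs automatically when the block exits
    }
}
```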
E
Yes, I think it's more about the entire idea. One interesting part of the issue is that he says a stateful span processor may be shared by multiple providers, but this allows one provider to shut down another one, so it's like, how does that work? They should be closed independently.
E
G
C
Yeah well, this is the thing: in practice we don't necessarily have a thing called an sdk in every language, right, the way it's divided up. We have these independent providers, at least the way it's specced out right now. Right, one thing we don't have is, say, a wrapper around all of this stuff that kind of bundles all of the sdk into a single thing, with a single manager, but that exists in some languages. I'm not sure that it's written out that way in the spec.
C
Definitely agree: it should not be on the api. This is like the whole point of that separation. Like, as someone writing mongodb instrumentation, I shouldn't have access to, nor be shutting down, the sdk, so it should be.
E
G
B
So I think, having talked to anarag about this issue, I don't think there's anything with the api. I think this is just about the sdk implementation of the tracer provider, where we have specified that there's a shutdown. I don't think this is talking about api versus sdk; I think we're getting lost in that. I think this is literally talking about the shutdown method in the sdk.
B
I think it's like what oberon then comments about. The fact is that there's nothing in our spec that says tracer providers can't be shared, and if we have a shutdown on the sdk version of tracer provider, we have a difficulty with those two things living together, because there could be sharing there. The tracer provider sdk could be being shared in some way where this shutdown would then stop one from working and not the other, or would stop the one when only one was wanting to be shut down.
B
G
C
What you're saying is, and I could be wrong about this, but you're saying, like in java, you can be running multiple services within a process, and that you may be starting up and shutting down one service, but that may be sharing a tracer provider across several services within that process. Is that the crux of this?
B
Well, certainly, nothing stops you from taking that reference and doing whatever you want with it, right; like the user, the sdk user, can share references to it. It can have a spring context that has access to it, it could have an end user that has access to it; like, there's nothing that says that multiple end users can't be having two threads having handles to the same implementation of tracer provider in the sdk, which has a shutdown method defined on it in the spec.
C
I mean, I guess, like the purpose of this is: the application owner, when you set all this stuff up at the beginning, you should then shut it down in a coherent fashion when you're done. It's true that in any language you could just, like, grab the shutdown method and call it at any time, but that would be incorrect. So is this like a structural issue, or is this us trying to save users from just doing something silly?
J
G
I feel like the language of this conversation is confusing me incredibly. Like, tracer providers don't have access to a span processor; the sdk owns a span processor, and you can start the sdk and you can shut down the sdk, but when the users are holding a tracer provider, all they should be able to do is get tracers.
G
C
I think what anthony is saying is you can register tracer providers, right, but we have a registered tracer provider, right. There's ways for people to register span processors and register these various things, so maybe there's like ownership that could potentially be spread out there.
C
C
Yeah, so if we've written this in a way that isn't clear, or if that is turning out not to be feasible, that's the thing that we should be addressing. But I think the intention is that, at the beginning of a process, this stuff is all set up and then, at the end of that process, there's something that globally controls the shutdown of all this stuff, and everybody else shouldn't be worrying about that aspect of things.
C
B
Yeah, so I mean the context from the java implementation is: the tracer provider is an interface and there's an implementation of it provided by the sdk, yeah. That implementation, which needs to be created at initialization of the sdk, has a shutdown method on it, which I believe is specified as something that must be the case in the sdk spec.
G
B
Two pieces of your application infrastructure, like your spring config or your dependency injection framework or whatever: if two pieces of code have a reference to it as the sdk implementation, one shutdown can cause the other to shut down. Or if someone just has a span processor being shared across two implementations of the sdk tracer provider, one of those implementations could shut down and the other one would get shut down when maybe it wasn't meant to.
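A minimal sketch of the sharing hazard being described: two components obtain the same sdk-level provider, and one component's shutdown silently breaks the other's telemetry. The names below are hypothetical, not the real opentelemetry-java API.

```java
// Hypothetical SDK-side provider with a shutdown method, shared by two "services".
final class SdkLikeTracerProvider {
    private volatile boolean shutdown = false;

    String getTracer(String name) {
        // after shutdown, spans are silently dropped
        return shutdown ? "noop-tracer" : "tracer:" + name;
    }

    void shutdown() {
        // a real SDK would flush and stop its span processors here
        shutdown = true;
    }
}

class SharedShutdownHazard {
    public static void main(String[] args) {
        SdkLikeTracerProvider shared = new SdkLikeTracerProvider();

        // "service a" and "service b" both received the same reference via DI / spring config
        System.out.println(shared.getTracer("service-a")); // tracer:service-a
        shared.shutdown(); // service a tears down what it thinks is "its" provider

        // service b is still running, but its telemetry is now silently gone
        System.out.println(shared.getTracer("service-b")); // noop-tracer
    }
}
```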
G
J
J
I'm in the sdk issue, right; I'm trying to ensure that john understands, or josh understands, that we do have the separation between an api interface that is used by instrumentation and the sdk interface that would have the shutdown method, and it does have this shutdown method in the go sdk and...
B
G
We have a correspondingly similar issue in the metrics design specs space right now, about whether there can be more than one processor, or more than one accumulator per processor, and where that dictates where the start and finish controls are. I can see a parallel here. We are underspecified, I agree.
C
To maybe rephrase this: is it the idea that this stuff should be private at this level, and there should be some higher level thing that exposes the shutdown function? Like, you shouldn't be going around and shutting down different providers and things; there should just be some global sdk "shut it all down, I'm done," and internally it can do that with tracer providers or wherever it actually needs to shut down the actual code. Maybe.
J
I think that was part of the impetus behind adding this shutdown method, right, because prior to this you had to keep track of all of the span processors that you had registered and shut them down individually. So this aggregated that into one place. Is there a higher level of aggregation that needs to happen? Perhaps; that is a valid question.
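A minimal sketch of the "higher level of aggregation" idea being floated here: a single sdk-wide handle owns everything that needs shutting down, and only the application owner calls it. The names below are hypothetical, not an existing API.

```java
import java.util.Arrays;
import java.util.List;

// stand-in for anything the sdk owns that needs an orderly shutdown
interface Shutdownable {
    void shutdown();
}

final class GlobalSdkHandle implements AutoCloseable {
    private final List<Shutdownable> owned;

    GlobalSdkHandle(List<Shutdownable> owned) {
        this.owned = owned;
    }

    @Override
    public void close() {
        // one coherent place where tracer providers, span processors, etc. get shut down
        owned.forEach(Shutdownable::shutdown);
    }
}

class GlobalShutdownExample {
    public static void main(String[] args) {
        Shutdownable tracerProvider = () -> System.out.println("tracer provider shut down");
        Shutdownable spanProcessor = () -> System.out.println("span processor shut down");

        try (GlobalSdkHandle sdk = new GlobalSdkHandle(Arrays.asList(tracerProvider, spanProcessor))) {
            // application runs; instrumentation only ever sees API-level types
        }
    }
}
```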
B
C
G
Yeah, I think you're right, ted. We've sort of said something about what the sdk is required to do functionally, but not really about the apis for starting and stopping sdks. That's really way underspecified. We've said you need to have a span processor concept and a tracer provider concept, but we haven't said what the ownership relationships are between these parts when you're assembling a complicated whole, I guess. And I don't think we should; I just want to point that out.
C
Yeah, I just wonder whether we need more; I just need to look into it. I wonder whether, with the sdk, if it's slightly understructured in our thinking about it, that will lead people to be like: oh, I need a thing like a shutdown, so can I tack it on here versus over there. And if there's no structure for debating that stuff, then this stuff can kind of move around, but maybe not always land in a place that's satisfying to everyone. But I'm just spitballing right here.
B
C
Right, exactly, yeah. Like, maybe we're missing... we need a little bit more of a framework for thinking about the sdk. But again, I don't know how much of a pain point this actually is. Maybe this is the only pain point.
B
I think it becomes a much larger pain point when you start talking about function as a service and the life cycle of an sdk that's running in that context, and I think this is probably why anarag is bringing it up, as an aws person that we have.
B
C
C
B
G
One of the topics in metrics that's very associated with this is whether the sdk spec will allow us to configure multiple resources in the same sdk space, essentially. So right now in the metrics world, the resource gets tacked on by the accumulator, and so I'm able to create tracer providers that will use one specific resource. Now I could create a new accumulator with a different resource and attach it to the same processor.
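A minimal sketch of the metrics situation just described, assuming a hypothetical accumulator/processor shape rather than the real metrics sdk: the resource is stamped on by the accumulator, so two accumulators configured with different resources can feed the same processor.

```java
import java.util.ArrayList;
import java.util.List;

final class MetricsProcessor {
    final List<String> exported = new ArrayList<>();

    void accept(String resource, String metricName, double value) {
        // the processor records whatever resource each accumulator attached
        exported.add(resource + " " + metricName + "=" + value);
    }
}

final class Accumulator {
    private final String resource;
    private final MetricsProcessor processor;

    Accumulator(String resource, MetricsProcessor processor) {
        this.resource = resource;
        this.processor = processor;
    }

    void record(String metricName, double value) {
        processor.accept(resource, metricName, value);
    }
}

class MultipleResourcesExample {
    public static void main(String[] args) {
        MetricsProcessor shared = new MetricsProcessor();
        new Accumulator("service.name=checkout", shared).record("requests", 10);
        new Accumulator("service.name=billing", shared).record("requests", 7);
        // the single processor now holds points under two different resources
        shared.exported.forEach(System.out::println);
    }
}
```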
G
C
This is an aside, but this is actually why I'm happy we're separating version numbers for the api and the sdk, because I do think the api is pretty solid and solidifying, and likewise for metrics it'll go this way. But sdk framework stuff like this, it tends to drag out a lot more, like this kind of nuance.
C
So I wouldn't be shocked if we end up having to do a major version bump of the sdk at some point to address some of these things. It doesn't shock me at all.
C
Anyways, let's maybe pass on this for now, bring it up again this afternoon just to get clarity on what this particular issue is about. And maybe some food for thought for maintainers is to think about the sdk spec and structure there; if there is a desire to have that arranged differently, or the feeling like we need more guidance or something overall there, to bring that up.
K
One short comment about josh's example: one of the failure cases identified in java was having multiple instances of the sdk with the same resource. They will produce bad metrics because of the aggregations and stuff. If they are not global, you will send multiple data deltas for the same intervals to the back end and...
G
C
A
I put in the agenda item for this afternoon's line item in the spec signal. We got one more to triage, another one from anarag. This is the last one.
C
B
C
Already going a different direction for serverless and lambda with this anyways, which is to accept the fact that the way lambda freezes things without informing the process is harmful for this stuff, and it should run as a lambda extension that's dynamically plugged in, so that it doesn't have to deal with all this freezing stuff that they do. But it's also, again...
C
This is a thing that the sdk can deal with, not the api, and the reason why I say that, to be clear, is I experienced this with prior tracing clients. I've written several of these things and I found very consistently that if you put something like flush or force flush on the api, users see that and they assume they need to manipulate that thing, and so they start writing code... I found this very consistent.
C
You'll start writing code to force flush, to flush in cycles, in order to deal with their problems, and then it interferes with the functioning of the sdk.
C
B
As I commented, I think this is a little bit analogous to java exposing the ability to call the garbage collector manually, which they had to basically make a no-op in the actual jvm implementation, because people just called it willy-nilly and it just made things terrible.
C
H
C
Well, we already have fl... to be clear, we already have flush as an sdk method, so I don't think it would necessarily change things from that perspective.
C
I'm just wary of saying this should be something on the api rather than the sdk, and someone other than the application owner should be out there calling this. You know, I see users literally write code that, like, wraps up span and with-span and flush, because, I don't know, I was in some situation where it seemed to help randomly, so now I do it everywhere, reflexively.
C
C
Yeah, this is an issue with lambda, honestly: lambda offering people the ability to run, to execute, runtimes that have no concept of a serverless environment, and they don't want to modify that runtime to have this concept, but then they want to be doing things like freezing the vm, and you know that creates trouble for running things on lambda above and beyond...
C
Just this, but we're coming up with alternate solutions to that particular problem, and I honestly don't think adding this to the api would really help there. But again, let's maybe talk about this again in the afternoon session, when rx is there.
C
C
I don't really have much to say, except just to let people know that I went ahead and opened a pr for this. The otep hasn't been merged yet, but it has more than enough approvals and no longer has any discussion going on that hasn't been resolved, so I expect that to happen.
C
That's... I got a request from tigran to just go through here and click resolve on everything, or make sure, and I did see, yeah, armin did just roll in and give some more reviews. So I think that's fine. I will update the pr, but if people want to have a look at how I split this up, which is not very much, and I switched to using more of, like, the official rfc language of must, may, etc.
C
I expect the remaining issues to get resolved without much trouble there, and for that otep to get merged today or tomorrow, but in the meantime I'd love to get this thing...
C
I'd love for debate on this thing to not get delayed, so if people can review the pr version of it and flag anything that looks suspect to you, please start doing that, just so that we can get this closed off before the end of the year, which really means this week, right.
C
It would be a bummer if this thing is just dangling out there waiting for an approval if there's no real objections to how we're doing it at this point. I want to be able to put this in, make a 1.0 version, and allow .net to 1.0 before the end of the year.
G
C
No, just to be clear, we cannot merge the spec until the otep is merged; nothing like that's happening. But I'm saying, like, please begin to review this pr right now, so that you do have feedback where there is a blocker, like start getting it in now, so that we're not just suddenly merging all this stuff right at the very end, we've been launching...
A
C
Okay, that's all I got on that; carlos, you got some issues?
E
Yeah, there are two issues. Is it okay, andrew, you keep sharing? Sure. Thank you, thanks so much, yeah. So the first one is about whether we should... and this is not something I think we have to discuss here, but please pay attention.
E
Basically, the idea is that some of the stuff that you cannot detect at the third time, and that should be going to resource, you may have them in spans as attributes instead, yeah. I just need more eyes for that. If people think this is dangerous or it's something bad, please raise your voice there. And the second one is more interesting, probably. It's about...
E
This is kind of something we had already discussed in the past a few times. Yeah, alolita filed this one, about whether, well, like, non-core components like exporters should live in contrib repos. She made a specific case of prometheus, and there was some discussion there.
E
So to be honest, I was trying to think about it a little bit, and my impression is that, like, basically when it comes to exporters, at least like prometheus, jaeger, shipping exporters should be in the core, should be the responsibility of the maintainers.
E
K
E
Yeah, and in this case I think that probably she was mentioning, or there's a feeling, that in some sigs the prometheus, at least the prometheus effort, doesn't have enough speed, and as she says, in that case that means that we need to add more maintainers or something to the core repo. But yeah, I think that I agree with what bogdan said about these core exporters.
E
C
I would like... I think this is mentioned in the comments on here, but we need to decouple the question of, you know, can we have more maintainers, or specifically in this case, what I'm kind of proposing.
C
We can have, it's fine to have, like the same way with the spec, prometheus maintainers who just have access to the prometheus codebase in open telemetry. It's neither here nor there whether that lives in core or in contrib; we've got the github tools to set that up either way with code owners.
C
No, I think I would actually propose precisely that, if we're gonna say there's a need for a prometheus working group to make sure the prometheus algorithms and prometheus code are working well and working self-similarly across the different languages, and it's potentially a burden on language maintainers to review that stuff, or it's creating a bottleneck.
C
I don't necessarily see a problem with adding additional maintainers for these, essentially for like plug-ins that we're just putting in core because we think they should be bundled up.
C
But again, my goal here is to make sure that we're lowering the burden on people. If that actually creates a bigger headache for the language maintainers, to have people potentially approving prometheus stuff without their ability to review it, then you know, that's a headache that hasn't solved any problem. So I think that's the part I'm actually more interested in exploring, that part, rather than whether this lives in core or contrib. I think the attempt to move it to contrib is maybe just tied up with that, frankly.
K
I'm failing to understand what's alolita's point about problems with the prometheus exporter. I think the biggest problems that we have with prometheus are in the collector, because of the pull model and scaling and all the things when it comes to exporting from the collector, because...
G
We have some problems with resources, which is what I want to talk about next. But actually, I think ted's use of prometheus is confusing this example. It's like the prometheus exporters we have today are so tied up with our metrics data model question, like you just pointed out, that that's not actually a great example. If we talked about jaeger maintainers, maybe this would be a better example, because tracing is so much more baked.
C
A
C
They're trying to do work, but they couldn't get reviews done because maintainers were too busy working on other stuff, and so they're trying to figure out ways that we could parallelize more, just so that we can increase the rate of production, which I'm all for, but not if it's going to destabilize things or be, like you know, just painful for language maintainers. But I do think we have to figure out: how do we parallelize more? How do we allow more people to get involved in the project?
G
To have more maintainers and approvers that report to alolita, I think that's really the core problem here: amazon has a lot of interest and not enough maintainership and ownership of approver roles, and I think they deserve it. But it's like they've had a lot of interns on the project; like, a large number of interns does not add up to a number of maintainers, is the problem. So they need to put more senior people in to get those roles, and I think we're close to having, you know...
G
G
H
G
H
But right, and they have a ton of interns, and like, none of that... like, interns are not... I would love to promote additional approvers and stuff that could help, but interns that are only going to be around for three months, I don't want to promote to be an approver and then have them gone in three months, right.
K
So you see that it's pre-reviewed; she mentioned to me that they do something inside. I was encouraging her very strongly to do that outside, so we see the reviews, we see what's happening, we're not just thrown a review and say yeah. So anyway, there are a bunch of problems when it comes to how to deal with all the interns from aws, and I don't know if this is the right solution, again.
K
And I'm more willing to make somebody an approver or maintainer who is a very strict reviewer and says no to a lot of things, than somebody that just approves, clicks buttons, approve, approve; that doesn't count either. I need to see that you care and you find things, and I'm more appreciative of a person that says no to things than the one that approves quickly.
C
C
I don't think we really support our existing maintainers very well on this front. But I think making it a little more clear to people how this works, and also making sure we're doing the appropriate encouragement, and then also actually, like, doling out memberships and approver status appropriately as people gain it. I think we could...
C
We could help the organization by adding some more structure around that stuff, because I do hear, I do get feedback from community members, which is that they feel like it's not totally clear how that stuff works, and people feel shy about requesting status.
C
B
Yeah, I'll just, you know, follow on on that. One thing that I try to do with all of my coworkers is encourage participation, encourage doing reviews like bogdan was suggesting, not just, you know, checkmark reviews, but like actually getting involved, because that's very valuable. And I think we lost ted.
C
All right, my computer died. Apologies. Anyways, let's move on from this one.
K
D
G
It's now a good time to talk about resources and the sdk, so a few threads are coming together and I'd like to try and point that out to the group. Earlier bogdan said something about how if we combine resources somehow, the metrics break, and I've now discovered what was actually meant. So I want to talk about cumulative reporting, which is something that is fairly standard in the world of prometheus.
G
It's the idea that you have a start timestamp and then you begin incrementing something, and you know when that start timestamp began. Now, I recently discovered a corner case here that I didn't think through earlier, which is what happens when you strip away all your uniquely identifying labels from a metric time series and it's cumulative. Well, you end up with overlapping cumulative time series data points, and we haven't talked about what that actually is supposed to imply.
G
I've been talking with people who came out of monarch at google and asking them what they thought we should be doing, not that that necessarily dictates what we should be doing, but their attitude was: when you have overlapping points, you should choose one. It's a semantic bug that there is more than one point value overlapping, and the idea of a start time is really meant to help us detect a reset.
G
If you reset, you shouldn't have both time series somehow interleaving after that point. So we have a problem: we cannot remove all the labels that are uniquely identifying from a time series, or else we corrupt that data. And now the question of resources comes back up, which is that we have not actually specified either any obligatory, mandatory, unique resource labels, nor have we specified any resources that must be applied as labels on the outbound.
G
So at this point, if we don't attach all of our resources, and if we don't have a unique identifier in every process as resources, we are going to output incorrect data. So one thing that makes me think is we need to start specifying a unique resource. We need to start specifying that a unique resource must be applied to outbound metrics, and we probably also need to specify this behavior I'm talking about with start times when they overlap, which is that you choose one, you don't mix them somehow. There's a lot to talk about here.
G
I just want to open that up. There's more I could say, but I think I can go too far at this point. What the problem is...
K
But the start time has its own point; like, it was added not for this, it was added for the reset problem, so...
A
K
Oh, forget about this: it's the reset problem on short-living processes that is killing the old logic. So imagine, for example, you are reporting a cumulative from a lambda, or from a short-lived, less-than-a-minute process. So then you report value 10, and then value 15 from a different process because it restarted. You will consider them the same thing instead of making the thing be 25, because you actually... so.
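A minimal sketch of the failure case just described, with hypothetical types rather than real sdk or collector code: a start timestamp on cumulative points lets a consumer detect the restart and add the two contributions (10 + 15 = 25) instead of treating them as one unbroken series.

```java
final class CumulativePoint {
    final long startTimeNanos; // when this cumulative series began
    final long timeNanos;      // when this point was observed
    final double value;        // cumulative sum so far

    CumulativePoint(long startTimeNanos, long timeNanos, double value) {
        this.startTimeNanos = startTimeNanos;
        this.timeNanos = timeNanos;
        this.value = value;
    }
}

class ResetDetectionExample {
    // contribution of `current` on top of `previous`, for what looks like one series
    static double delta(CumulativePoint previous, CumulativePoint current) {
        boolean reset = current.startTimeNanos != previous.startTimeNanos
                || current.value < previous.value;
        // after a reset the counter started over, so its whole value is new
        return reset ? current.value : current.value - previous.value;
    }

    public static void main(String[] args) {
        CumulativePoint first = new CumulativePoint(0, 60, 10.0);     // short-lived process #1
        CumulativePoint second = new CumulativePoint(100, 160, 15.0); // restarted process #2
        double total = first.value + delta(first, second);
        // 25.0 with reset detection; treating both points as one unbroken series would report 15.0
        System.out.println(total);
    }
}
```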
G
So do you think we should specify that when overlapping points are seen in a time series, the intention is to sum them in the overlapping sense, not to choose one, not produce overlapping things? Let's start with, okay. So then we have new problems to discuss, like in the resource pipeline of a collector: if you drop all your identifying labels, you have literally changed the data.
K
If you drop all your... correct. But so there are two failure cases, let's start with this: one of the failure cases is people using the same key twice.
G
G
G
I mean, well, it is a bug, and I'm asking what we should do in case of this. Because if you treat those... I've had this lengthy conversation inside lightstep about what we call the non-monotonic cumulative points; these are traditionally output as gauges in prometheus, and the question has been: why do those have a start timestamp, and why should you treat them as any different than a gauge? And I have an answer, I have a couple of answers, yeah, and we've talked about them last week.
G
Call that a bug, but if we do this differently for non-monotonics as well as for monotonics: if I just blindly treat these as a gauge and this bug is existing, I'm now interleaving two gauge time series, and because I choose last value there, I'm just going to choose a random value, meaning I'm going to interleave two time series now and I'm going to see the mean jump up and down or something like that. Definitely.
K
Definitely, I completely agree with you. So what you are saying is: okay, let's assume the process does the right thing, does not do the wrong thing. We have a failure case like the zombie example, and what to do in the collector. I think in a backend... let's put it this way, I don't think we should include the collector; I think in the back end.
K
G
Great, I think I agree with you, and I suspect that that's the behavior. Now there's a new topic, though, that I'd like to talk about, but it's related. So if I may: last week we talked about openmetrics and open telemetry, and the big topic of conversation was this mode of pushing deltas, but we didn't talk much about pushing cumulatives. But I did at some point have a sort of direct interaction with brian brazil, where he said something like, you know...
G
If you have an... and we were trying to talk about the literal difference between this non-monotonic cumulative sum and a gauge. So what is the difference? And I'm saying, if you want to erase labels from a non-monotonic cumulative sum, what we should be doing is adding those point values, not averaging those point values. And where this comes up in reality is like: if I have one process counting heap size by two dimensions, and I have another process counting heap size by three dimensions, and I want to combine all that data together.
G
I end up with this problem, which is that the average value over here is like higher than the average value over there. So I shouldn't ever be using average values; I should be using sums, and those will sum up correctly. Now, what I observed is that we can't remove... if you have a label, like, let's call it color, that third dimension, which is an optional dimension on a sum observer, a cumulative sum, and you drop that, you're also changing the semantics of the data.
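A minimal sketch of the point being made about erasing a label from cumulative sum points: merge the overlapping series by summing, never by averaging. The label encoding and method names below are hypothetical, purely for illustration.

```java
import java.util.HashMap;
import java.util.Map;

class LabelErasureExample {
    // series keyed by "dim1,dim2,color" -> cumulative value; drop the trailing "color" label
    static Map<String, Double> eraseColorBySum(Map<String, Double> series) {
        Map<String, Double> merged = new HashMap<>();
        series.forEach((key, value) -> {
            String withoutColor = key.substring(0, key.lastIndexOf(','));
            merged.merge(withoutColor, value, Double::sum); // sum the overlapping points
        });
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Double> heapBytes = new HashMap<>();
        heapBytes.put("pool=eden,gc=young,color=red", 100.0);
        heapBytes.put("pool=eden,gc=young,color=blue", 50.0);
        // dropping "color" keeps the total meaningful: 150.0, not the average 75.0
        System.out.println(eraseColorBySum(heapBytes));
    }
}
```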
G
So we can't just arbitrarily drop any labels, really, without being very careful about whether it's correct or not. There are some labels that we can, when they're consistent across a process, but if they're not consistent across a process, we can't really drop them without changing the semantics of the data, right. I have a solution, and the high level idea is, for us to call ourselves openmetrics compatible anywhere we're introducing a new mechanism or a new technology, or a new protocol support, that openmetrics can't really do in the prometheus model...
G
G
That's going to take a bunch of time series points and output correct otlp, in the sense that we all think is semantically untransformed, effectively. And this means effectively getting to the prometheus recording rules, where we're going to have some state, some window of data, in the collector, and we're going to output aggregated data. And to make that correct, we have to not just literally drop labels; we have to sum things when we're dropping labels. I'm interested in designing that, speccing that out.
G
And we also will... I think the problem is that conceptually we are encouraging users to think, oh, it's fine to add labels, and prometheus is going to break because we're encouraging people to mix their labels. So what I'm seeing now is the difference between, like, the older world of prometheus and statsd.
G
You opted into metrics, you decided exactly which metrics you wanted and exactly which dimensions you wanted, and in the open telemetry, open census world we're saying you're going to link in a bunch of instrumentation, and it could be different implementations of the same instrumentation, so different instrumentation libraries. They may have different choices about dimensions. You might configure pipelines that are going to change that stuff. In order to do all that, we need to have, like, a processor that can do that stuff, essentially.
K
So in open census we did metrics manipulation only in a global way. So you had only one way, which was the global instance of that, and hence all the measurements are getting to the same thing. So you didn't have that problem too much, and we didn't have the problem of callbacks. Anyway, it was a bit simpler in open census, yeah. What I'm trying to...
K
I think we need to decouple the capabilities of the sdk versus the capabilities of the collector, and I think we need to identify problems in the sdk, and what are the abilities that we give to the users inside the sdk versus what are the abilities in the collector, because they are different and they are at a different stage. In the sdk you have all the measurements, you can do all the... yeah.
G
Yeah, I agree, these are totally different problems. I'm just starting to say that I think we've got some collector-level specs that we ought to be doing in order to say we are compatible, because we can output data from a collector that fits the model prometheus expects, which...
D
K
I think maybe we can... by the way, have we decided on the collector if we support a pull or push model for open metrics? Because I'm more and more inclined to delete the pull model from the collector, which means the collector should export open metrics only in a push model.
G
We're gonna run into trouble, but I think that's an independent one anyway. The highest level thing I was trying to say is that where there's a disagreement conceptually between open telemetry and openmetrics, we need to make sure that we can smooth that data back into a way that it works. And so far we've talked about two different things: in the sdks, it's like converting deltas to cumulatives; we're going to spec that out, it's going to work and it'll support prometheus. Then there's the collector's side.
K
Yeah, maybe finish the sdk first; in terms of priorities, I would try and encourage the sdk and then fix the collector.
G
C
C
Both have a way to improve literacy within the existing open telemetry group, which is tracing heavy. We need to get more metrics experts involved in open telemetry, and we need to ensure that this thing we're writing is actually comprehensible to humans, because we're making something that is very good and very complicated on that front. So that's going to be january for us, I think, really digging into all of that.
G
B
K
One last comment: josh, you may have seen that I filed a couple of issues related to metrics in go. The main reason was, I started to adopt what we have in the spec into java, and I found a couple of differences that I would not do in java, like using a global map in the accumulator. I just... yes.