From YouTube: 2021-01-15 meeting
A
Yes, great. Yeah, Josh: hi, hi everyone. Usually we start this meeting with Andrew's time box, and we could; I think we probably ought to do that. Although, actually, it feels like every week we do that and it's the same, and that's why we planned a workshop for tomorrow.
A
Maybe we should start by talking directly about the workshop. Alolita and I, and Yana and Bogdan, met with Sergey and Josh Suereth (got that right? one of the Joshes from Google) and talked about it just earlier today, and we came up with some plans. So I was working it out before I joined the meeting here. Andrew, do you object to starting with a discussion about the workshop?
A
Here, I will share it. It's a little bit drafty, so you can imagine that. Okay, so we talked about trying to break this day up into sort of a five-and-a-half-hour block on the calendar.
A
Now, I'm gonna be there that whole time. We talked about breaking it up into three parts, and among those three parts leaving one of them for just general Q&A at the end. So that left us trying to plan out two sessions of about 90 minutes each, with some break time, and the two sessions that we've identified are the ones we talked over earlier today.
A
We talked about doing a small group to lead off, sort of beginning with some kind of opening remarks, and I would be able to work on some of that; I think I'd like to share this responsibility with a few of the people here. And then, to fill out the first 45-minute chunk: it's really to lay out our goals of working together with OpenMetrics and Prometheus and OpenTelemetry's tracing, and trying to get everybody together here to make this work.
D
That's fine, and we definitely haven't.
A
But the group is here to talk about these items, and I wrote down the first two that came to mind for me, which is sort of: we want to get this to work, but there's some trouble. Excuse me, I'm busy.
A
So then the second 45 minutes of that first session would be to talk about the collector deployment models, pulling versus pushing, the problems that we've discovered about DaemonSets versus StatefulSets, and scalability questions in general. I put Yana's name there, with Bogdan, as our collector experts.
A
Just to give people a break, there's some time here that I put in, and then we'd resume at 11:30. Bogdan and I will then begin talking about some of the things that OpenTelemetry's current data model is doing to extend and enhance what we can do as vendors and system providers, and as providers of SDKs and so on. So this comes down to things like resources.
A
I was in the middle of writing this, so it's incomplete, but: delta versus cumulative, the whole philosophy about adding and removing labels, and histograms potentially being variable. That's going to spill over into the next part, which is where I think a lot of people want to just talk about histograms; I know that. And for this one I've collected a lot of feedback from various people: AWS has provided a bunch of feedback, and we've got New Relic and Datadog all working on this sort of area.
A
The idea was to make sure that the Prometheus people are aware of what we're talking about, because we're getting close to making some decisions. So: 45 minutes on histograms, another break, and then Q&A. That was my idea. I didn't really finish making these notes, but the idea would be that one of the people named on each of these is going to be the person in charge of planning out the structure, or skeleton, of those 45 minutes.
A
That's my idea, and otherwise that was all I had to share on that. I was going to create an outline for the second hour and a half based on some of the thinking I've been doing, but for the first half I'm definitely hoping to get someone else to help.
D
Yeah, Josh, Jana and I are working on it right now, so we'll get that added in by end of day today. And sure, yeah, I'll keep you posted as we finalize.
A
Great. So between me and you, Alolita, let's just make sure that we blast this out on Gitter and Twitter and various places, so that people who are checking in with us at the last minute can find us. Thank you. That was all I had prepared on the workshop. I'm gonna unshare, because I don't have anything else worth seeing on my screen. And so, I guess, that's all to build some anticipation: there is Bogdan here.
A
You had expressed... you know, we talked about how the OpenTelemetry metrics project is maybe slowing down a little bit here, and we need to kind of reboot and get more specs written, and about how our SDKs need to be more mature, and so on. What we discovered in this earlier discussion we had is that there are almost two tracks here. There's a track...
A
So while we may be talking about how we need the collector to work, and getting more collaboration with Prometheus, in that first hour, we haven't really gotten enough time in front of us to talk about getting the SDK specs written and finishing the OpenTelemetry metrics clients. And so I want to put that to the group here and say that we are lacking enough contributors, and enough time on the calendar, for writing specs; and there has been a hope that we would really dive in this month and commit to finishing stuff.
A
But it is pretty much a separate conversation from all the stuff that we have planned for tomorrow about working with Prometheus, and about everyone getting to an agreement that these enhancements we want for the data model are good. Once we get to that agreement, I think we should schedule another session. It can be smaller groups, but it should be longer: long blocks of time where we can really iron out what we want these specs to say, and then maybe just write them together, and so on.
A
All right, well... excuse me, can I ask a quick question?

E
Please do. Now, there was a Google Doc with a brainstorm of workshop topics, a long list of ideas. Yes.
A
Yeah, and I know, Jonathan, that you had put something here, for example. Maybe that falls into Q&A; maybe it is sort of one of these topics that is more about the OpenTelemetry client libraries and SDKs, and it is about data model and Prometheus integration questions. Perhaps we should talk about your questions and concerns right here and now.
E
Yeah. So the API naming is basically about, mostly about, the instruments that OpenTelemetry provides. I went through those with Tommy, who is leading the Micrometer project, and both of us find it's not very intuitive, and I believe it is pretty hard for users to pick the right one. And not just to pick the right one but, for example, to even get up to the point where they can select the meter.
E
This is mostly about the Java API, by the way, or the Java SDK.
A
So I confess to not being very familiar with the Java SDK in particular, but we have tried to standardize instrument names, and I think that probably the root of your question comes back to some of these questions about the data model. For example, a lot of the questions we get are: why do you have a ValueObserver and a SumObserver, when originally in Prometheus land you had just a Gauge? And the reason that we've given for this has to do with the ability to add or remove labels in a semantically meaningful way.
A
So if we can get to a place of agreeing that data that's a sum is different from data that's a gauge, essentially, then the next step is to agree that those should be different instruments. And I feel like we haven't gotten to that first point of agreement, which is causing us confusion when we talk about that second point. The other point that may contribute, and I'd just like to say that because I don't know where the confusion comes from...
A
...is where we went because of, I would say, the OpenTelemetry guidelines, which more or less said we want to separate semantics from implementation, the API from the SDK, by giving semantics. And it was hard to find semantics for the current metrics APIs that we knew, which had types that were very behavioral in nature.
A
And then I see your second point here is about time, and that has been one where we've definitely had debates and kind of got stuck, I think. Recording time, timing, is kind of special; the APIs are special. And there's been a write-up of how Micrometer does a lot for recording time. When I first saw something about that, it reminded me that, you know, the whole tracing side of OpenTelemetry is trying to design an API...
A
...that's about recording timing events. And so, at some level, there are things that you may do with a metrics API that OpenTelemetry wants you to do with a tracing API, and so there's some question there as well.
B
I think, Josh, you talk too much, and I appreciate that. But I wanna hear what exactly the problem was and how we can improve it.
E
So, the non-intuitiveness of, I believe, the naming of those instruments: I believe it is mainly coming from the fact that there are a lot of them; there are like 12 instruments, okay? And I think for users it's not very easy to pick the right one, because, for example, if you look at widely adopted metric libraries, they have a subset of this, say, six instruments. Right?
B
We have 12 because of the stupid Java. If we templatize based on the long or double primitives, we cause boxing and unboxing problems, so we had to...
B
We had to duplicate all of them to have long and double versions. But yeah, essentially it's six. Six, yeah; it's just a matter of the input type, because we want to use primitives to avoid boxing and unboxing.
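The boxing constraint being described can be sketched roughly as follows; the interface names here are purely illustrative stand-ins, not the actual OpenTelemetry Java types:

```java
// Sketch of the constraint described above (illustrative names only, not
// the real OpenTelemetry Java API). A single generic instrument such as
// Counter<T extends Number> would force add(T), boxing every primitive
// value on the hot path, so each logical instrument is specialized into
// a long variant and a double variant, doubling six instruments to 12.

interface LongCounter {
    void add(long value);   // takes a primitive long, no boxing
}

interface DoubleCounter {
    void add(double value); // takes a primitive double, no boxing
}

public class InstrumentPairs {
    static long longTotal = 0;
    static double doubleTotal = 0.0;

    public static void main(String[] args) {
        // Call sites pass primitives, so no Long/Double is allocated per call.
        LongCounter requests = v -> longTotal += v;
        DoubleCounter latency = v -> doubleTotal += v;
        requests.add(1L);
        requests.add(2L);
        latency.add(0.5);
        System.out.println(longTotal + " " + doubleTotal); // 3 0.5
    }
}
```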
B
But, Jonathan, I think this is a great topic to have tomorrow. And maybe, Josh, what do you think? Before talking... so, we have the data model development topic, and I would like to see a topic about data model instruments. Maybe that one, yes, that one can be changed to cover all these discussions about the library and stuff, yeah.
E
You need to go through a lot of different classes. There is the global metrics provider, which will not give you a metrics class but will give you another provider, which is the meter provider, which will give you a meter, and you can get the instrument from the meter. And I found it a little bit complicated.
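A rough sketch of the lookup chain being described, using hypothetical interface names rather than the actual Java SDK classes:

```java
// Sketch of the layering described above (hypothetical names, not the
// exact Java SDK types): global provider -> meter provider -> meter ->
// instrument, the hops a new user walks through before recording a value.

interface Counter { void add(long value); }

interface Meter { Counter newCounter(String name); }

interface MeterProvider { Meter meter(String instrumentationName); }

public class LookupChain {
    static long total = 0;

    // Stand-in for the global singleton the real SDK exposes.
    static MeterProvider globalMeterProvider() {
        return instrumentationName -> counterName -> (value -> total += value);
    }

    public static void main(String[] args) {
        MeterProvider provider = globalMeterProvider(); // hop 1: global provider
        Meter meter = provider.meter("my.library");     // hop 2: named meter
        Counter counter = meter.newCounter("requests"); // hop 3: instrument
        counter.add(1);                                 // hop 4: finally record
        System.out.println(total); // 1
    }
}
```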
E
Maybe I'm the only one who thinks this, but I think there are areas there which could be improved in terms of usability.
B
So, first of all, I would really encourage you to provide a half-page ideas document about the things that you found hard, just so that we can share it with everybody, and everybody can comment. That is a great start. Second...
B
Secondly, I do understand the problem with the global thing and all of that; you can skip some of the steps, but not all of them. We have the same problem in tracing, we follow the same pattern there, and most likely users of our library will get familiar with the concept. And it's actually very familiar from the logger concept: you have a logger, and you get a logger with a name. Essentially, the meter is the equivalent of the logger.
E
Actually, that was what puzzled me a little, what you just mentioned. At least in the Java SDK, the meter, I think, is not the equivalent of, for example, a logger or a tracer, because the meter will basically give you a bunch of builders.
B
I see. Yeah, it'll give you instruments that you can use to do whatever you want. Okay, we need to think about this. Maybe, as I said, we can take this offline, unless you believe you want to have a half-hour discussion tomorrow. No?
A
I can say that there has been feedback in the otel-go repo, and it's roughly what you said: the new user has to go through four layers before they can get to do anything. And I know that Yana proposed some simplifications to help, at least to remove one layer. I think it's a good idea to try to improve the API's usability.
A
The question that you've raised kind of caused me to have flashbacks: we were talking about this type of question, about whether the meter is bound to the instrument or to the instrumentation library, about a year ago. So I feel like the project has moved on to talking about a lot of different stuff, but we did talk about that quite a lot in the distant past.
B
No, no, no, it's not a problem. I think we probably failed to document these things. If you still have these questions, it's good to raise them, and we should carefully address them this time, so that hopefully the next person who comes will not have the same concerns.
E
Also, at tomorrow's meeting, can we also talk about the special kind of instruments for recording time? Because I believe that is a very common scenario, and, especially for API usability, it can help users a lot if they have an abstraction over this scenario of measuring time.
A
I feel that there's definitely an open issue that says we should have a dedicated timing instrument. And I know that because there's a common, standard practice for it in a lot of existing metric libraries, for one thing; but there are also widely known problems with doing time conversion. So, for two reasons, you kind of want to have a built-in idiom for making a time measurement.
A
But the programmatic usage is different, because you need to create a timer, or measure time somehow, which is special. And so, yeah, I think I'm going to go find that issue.
E
It's very easy to mess it up, in my experience.
B
From the user perspective, yes. I think the implementation should be very simple: it's just a wrapper on top of what we call the ValueRecorder that has the capability of saying start/stop, or gives you an AutoCloseable object, or something like that, depending on the language, and you are done with that. So it shouldn't be that hard. But yeah, I agree with you that, because of the units, people will mess it up.
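A minimal sketch of the wrapper idea just described, with hypothetical names (the real API surface is still under discussion in the project):

```java
// Sketch of the timer wrapper described above (hypothetical names): a
// thin layer over a value-recorder-style instrument that measures elapsed
// time itself and records it in one fixed unit (milliseconds here), so
// callers cannot get the unit conversion wrong.

interface ValueRecorder { void record(double value); }

final class ScopedTimer {
    private final ValueRecorder recorder;
    ScopedTimer(ValueRecorder recorder) { this.recorder = recorder; }

    // try-with-resources friendly: the elapsed time is recorded on close().
    AutoCloseable start() {
        long begin = System.nanoTime();
        return () -> recorder.record((System.nanoTime() - begin) / 1e6);
    }
}

public class TimerSketch {
    static double recordedMillis = -1;

    public static void main(String[] args) throws Exception {
        ScopedTimer timer = new ScopedTimer(v -> recordedMillis = v);
        try (AutoCloseable timed = timer.start()) {
            Thread.sleep(10); // the timed work
        }
        System.out.println(recordedMillis > 0); // true
    }
}
```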
A
Issue 464 talks about this.
A
The question that we should answer, though, is: I think anything you do about timing is likely to be almost the same API as the one you use for taking a span in the tracing API. If it's not, I'd like to know what's different.
B
Yeah. We said at one point that one of the blocking things, the reason why we did not move forward with this, was that we need to carefully think about the correlation with spans, which also measure times, and see what the difference is.
B
I can answer this one, as we're doing a Q&A right now, Victor. Right now, for the label set, we do not offer a clear mechanism to select which labels are to be used or not. I think Go has a small implementation that allows you to do something like that, but we haven't specified a clear story for everyone in the specification. So right now the expected behavior is: whatever you put there...
B
...everything will be used as labels, and that will determine your cardinality and your time series. In the future, and not too far out, we want, as you pointed out, to have this kind of view API, or per-instrument configuration, that will allow you to specify a selector (or whatever we call it) to extract from the label set that the user passes, and also to extract labels from the context that was in use at recording time.
B
Unfortunately, we are very limited in time and we haven't had a chance to do this. But what I'm trying to say here is: we want to offer a mechanism, very soon, that will allow you to select a subset of the labels passed during every record, as well as to extract information from the context, and, very nicely, from baggage, baggage being one of the components in our ecosystem, which is key-values passed around with the request.
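The selection mechanism being described might look something like this sketch; the class and method names are hypothetical, not a specified API:

```java
// Sketch of the per-instrument label selection described above
// (hypothetical API): keep only a configured subset of the labels the
// caller passed, plus named keys copied from context baggage, before the
// value reaches the aggregator.

import java.util.*;

public class LabelSelector {
    private final Set<String> keepFromCall;    // label keys to retain
    private final Set<String> keepFromBaggage; // baggage keys to copy in

    LabelSelector(Set<String> keepFromCall, Set<String> keepFromBaggage) {
        this.keepFromCall = keepFromCall;
        this.keepFromBaggage = keepFromBaggage;
    }

    Map<String, String> select(Map<String, String> passed,
                               Map<String, String> baggage) {
        Map<String, String> out = new TreeMap<>();
        passed.forEach((k, v) -> { if (keepFromCall.contains(k)) out.put(k, v); });
        baggage.forEach((k, v) -> { if (keepFromBaggage.contains(k)) out.put(k, v); });
        return out;
    }

    public static void main(String[] args) {
        LabelSelector view = new LabelSelector(
                Set.of("http.method"), Set.of("tenant"));
        Map<String, String> selected = view.select(
                Map.of("http.method", "GET", "http.url", "/very/unique/path"),
                Map.of("tenant", "acme", "request.id", "abc123"));
        // High-cardinality keys are dropped; only configured keys survive.
        System.out.println(selected); // {http.method=GET, tenant=acme}
    }
}
```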
F
It does, thank you. I'm trying to implement, you know, a specific exporter, and at this moment I'm not sure how to go about doing that, given that... yeah.
B
You don't have to do anything; that will all be part of what happens before the exporter. I think all these things, all the logic, will happen before hitting the exporter, and the exporter will get whatever the library produces. There will be a configuration for the library that allows you to configure this, but the exporter doesn't have to do anything.
A
Okay, it makes sense to expand on that. There's something in the middle, between the producer of data and the exporter, that's going to do some processing, and we know, sort of mechanically, how to process the data. Especially in the SDK it's very easy; there's just a configuration question: okay, you've got some data with some labels, and if you want to change the labels, the question is which labels you want to choose. And it turns out...
A
I think the hardest question is deciding which labels, not how to respond once you know which labels. Okay, thank you. Bogdan referred to a little thing in Go: all it does is erase labels and then presume that the aggregator will do the right thing. So if you erase labels from a cumulative, as long as you have the SumObserver, data points will be summed together, which is the correct behavior for that data point. And so this little thing is just erasing labels and assuming the right thing will happen downstream, but you are not...
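The erase-and-let-the-aggregator-merge behavior just described can be sketched like this; the code is illustrative, not the actual otel-go implementation:

```java
// Sketch of the behavior described above (illustrative code, not the
// actual otel-go implementation): after a label key is erased, points
// whose remaining labels collide reach the same aggregator, and for
// cumulative sums the correct merge is simple addition.

import java.util.*;

public class EraseLabels {
    // Drop `erased` from every point's labels, summing any collisions.
    static Map<String, Long> eraseAndSum(Map<Map<String, String>, Long> points,
                                         String erased) {
        Map<String, Long> merged = new TreeMap<>();
        points.forEach((labels, value) -> {
            Map<String, String> kept = new TreeMap<>(labels);
            kept.remove(erased);
            merged.merge(kept.toString(), value, Long::sum);
        });
        return merged;
    }

    public static void main(String[] args) {
        Map<Map<String, String>, Long> points = Map.of(
                Map.of("path", "/a", "host", "h1"), 3L,
                Map.of("path", "/a", "host", "h2"), 4L);
        // Erasing "host" makes both points share {path=/a}: 3 + 4 = 7.
        System.out.println(eraseAndSum(points, "host")); // {{path=/a}=7}
    }
}
```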
A
It would just be further configuration, yeah; the context stuff is coming. The thing that I'm aware of is that doing the same type of label removal, specifying exactly which labels are exported, in a collector that's re-aggregating data may require some processing that has a sort of short-term state involved. Because what's different is that in the SDK your exports are always aligned: you come to export some data, and you're going to export everything at the same timestamp.
A
So doing alignment on time is done for you in the SDK; but if you're doing this again in the collector, where times aren't quite aligned, you have to go through a bunch of extra work. And I think that's work that will and can be done; it's just a pretty significant chunk of work. So getting this in the SDK is easy; configuring it is sort of the hard part.
A
I don't know if you've seen the views mechanism in OpenCensus. I think that potentially we're going to move in the direction of, you know, the main function that installs the SDK being able to provide a YAML file, let's say, and that YAML file will set up, for each of your instruments, which labels you should be using. That seems like a lot of work, because every SDK will have to do that. So then you start to think: I just want to configure that in the collector.
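The YAML file being imagined here might look something like the following; no such schema is specified anywhere, and every key name in it is hypothetical:

```yaml
# Hypothetical per-instrument view configuration of the kind described
# above (illustrative only): for each instrument, declare which label
# keys the SDK should keep; everything else would be erased before
# aggregation.
views:
  - instrument: http.server.duration
    keep_labels: [http.method, http.status_code]
  - instrument: queue.depth
    keep_labels: [queue.name]
```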
F
So, sorry, just a somewhat related but separate question. When we talk about these (not the collector) aggregators, and sum and stuff: are these plug-in code that could be used either in the SDK, or in the collector, or in some other place? Or is this specific only to the OTel SDK implementation?
F
Yeah, so, well, yes or no. I was just thinking: the algorithm to sum a value, and to provide a sum for a given period of time, now that seems like pretty standard code. But as far as I could tell, at the moment, all of the mechanism for that, including all the classes required to interface with the, you know, exporter and the aggregator, is built into the SDK. Right?
F
So if I were to write a collector, I would likely have to write the same code. So I was just wondering if, you know, the algorithms for doing a sum, or for doing a count/min/max/sum, or for doing a DDSketch, or what have you, are just an interface of some kind, with some code that can be used anywhere.
B
The interesting thing here is that it completely depends. The reason why it depends is that our OTel protocol, the OTLP protocol, exports aggregated data, while all these aggregators that you are mentioning work on raw measurements. So we solve different problems: there is the problem of going from individual measurements to produce aggregations, and then there is another problem of going from different aggregations to re-aggregate, merge, or whatever.
A
I was trying to draw a distinction earlier between the work you need to do to, say, change dimensions in the SDK versus in the general case of the collector, and the reason it's different is that the time points are going to be different.
A
You could use all that merge functionality, essentially, and have the otel-go SDK act like a stage in the collector, which would aggregate over ten-second windows and then output new, re-aggregated data. I think this is what we will need to add.
F
Sure, Josh, I may take you up on that one day. Yes, awesome. Yeah, I've got a prototype PR, in fact. Okay.
F
Since I'm the one kind of talking at the moment, I'm just making a comment. By the way, I'm using the C# implementation at the moment, or looking at the C# implementation, and it seems like a lot of the... so, I know that we have separated out the API portion from the SDK portion.
F
But as I said, when I was trying to implement an exporter, it seemed like a lot of the internal classes are only available if I pick up the SDK. I would have assumed that the API portion would be just the type definitions, the interfaces, that I could use throughout the SDK, and that the SDK is just one specific implementation.
F
Right? You know, if I have an exporter, and I chose to use a different SDK of some kind with those exporters...
B
So, Victor, first let's clarify one thing: the exporter is a concept of our own SDK, so the API does not come with any notion of an exporter. The API, as you mentioned, is just the interface that the user can use to record measurements.
B
But if you decide to choose a different implementation of the API, that may come with a different way to plug in different exporters, or it may not come with any way and just export directly to Prometheus, for example, or something like that; we cannot control that. So, that being said, what I was trying to say is: we have an API that everyone can implement if they want, or they can use our SDK. Our SDK has this notion of an exporter, and it's part of our SDK contract, our SDK API.
A
I can imagine reasons why someone would, but it's pretty esoteric. For the most part, we expect people are going to take our default SDK, and it itself has a sort of component architecture where, you know, in metrics specifically, there's a part we call the accumulator, there's a part we call the processor, and then there's the exporter.
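The component layering just named can be sketched like this; these are illustrative stand-in types, not the real SDK classes:

```java
// Sketch of the component architecture named above (illustrative types,
// not the real SDK classes): an accumulator collects raw updates per
// label set, a processor turns the accumulated state into export records,
// and an exporter sends those records out.

import java.util.*;

interface Exporter { void export(List<String> records); }

public class MetricsPipeline {
    // Accumulator: raw updates keyed by label set, merged as they arrive.
    private final Map<String, Long> accumulator = new TreeMap<>();

    void record(String labels, long value) {
        accumulator.merge(labels, value, Long::sum);
    }

    // Processor: turn accumulated state into records the exporter consumes.
    List<String> process() {
        List<String> records = new ArrayList<>();
        accumulator.forEach((labels, sum) -> records.add(labels + "=" + sum));
        return records;
    }

    public static void main(String[] args) {
        MetricsPipeline pipeline = new MetricsPipeline();
        pipeline.record("{path=/a}", 2);
        pipeline.record("{path=/a}", 3);

        List<String> sent = new ArrayList<>();
        Exporter exporter = sent::addAll; // stand-in for a real exporter
        exporter.export(pipeline.process());
        System.out.println(sent); // [{path=/a}=5]
    }
}
```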
B
Correct. So, as you can think of it: there is an API, there is some processing happening, and then there is exporting. The API is just the first surface. The contract for the exporter is with the processing part, which is our SDK, because that processing part is the one that produces the data in a specific format that the exporter can...
A
...consume. But it's also reasonable to assume that someone could create a completely new SDK and still use those export interfaces. In Go, for example, there's an sdk/export/metric package that's just full of interfaces; our SDK uses those interfaces, but a different SDK could also use them.
A
Looking up label sets and finding a unique entry in some map, so that your aggregator only sees updates for that specific label set: the mechanism to do all that is pretty complicated; it has to deal with concurrency and a bunch of other stuff, garbage collection, and so on. So we're trying to sell you on the idea that you'll use our accumulator, because that part is tricky, and then all you have to do is write an aggregator and an exporter.
F
Well, I don't necessarily want to; I was just asking questions. Right, so, yes: it was just my initial thought that when I picked up the API, all of the interfaces for the API and the aggregators and exporters would already be defined, and that I would just use the ones you provide; not that, you know, I would have to pick up the SDK just to pick up the interface to override. That's really my only question, that's all.
A
I imagine if you were, for example, to create a metrics SDK that wanted to record raw events and nothing more, there would be no reason for you to use our SDK; just create a logger that writes to a file or something like that. That's going to be a completely new SDK, but you're not doing aggregation there, so that makes sense, I think. If you're going to do aggregation, you only win by using our SDK, yeah.
F
So, in my case, I am likely forced to just pass raw data to our existing library; and then, if the OTel aggregators and stuff were to be used, I would have a bigger problem in trying to make sure that the aggregators, and how they aggregate, match with my, you know, existing back end and so forth. So I'm trying to bridge how to leverage, you know, the OTel aggregators with my back end without having to, you know, completely send raw data.
A
Yeah, I think so. One good test case that might resonate with the audience here is the Prometheus summary, which is a particular algorithm that Prometheus client libraries have, and it's not very well documented; it's not specified. So when people say "I use a Prometheus summary," what they mean is that they use exactly a piece of code that Prometheus gave them. We can't just go and re-implement that; we have to use exactly that piece of code, and exactly that piece of code doesn't conform to our interfaces.
A
So that's going to be hard. But once we did it, I guess the idea is: what is the OTel SDK buying you? It's taking care of knowing the instrument, keeping track of label sets, making sure that memory doesn't explode because of it, and giving you the ability to just write an aggregator that only has to deal with data points, not think about mapping new labels and so on.
B
And you have a bunch of other benefits, like the ability, even though somebody did not want to produce a summary, to still override that and produce a summary for that specific instrument, and so on. So there are a bunch of other configurations that you can do if you are consuming third-party instrumentation rather than your first-party instrumentation; in the case of third-party instrumentation, it has a lot of other benefits, right?
A
I feel like I've heard this request a couple, a few, times, though, and it seems like we ought to be able to have an aggregator that just passes through to your code. At that point you're effectively bypassing the processor; you don't even have an exporter. You're just using the OTel SDK to manage aggregators for you, really, and then your aggregators are basically bypassing the rest of the OTel SDK and doing their own thing. I don't see a big problem with that.
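The pass-through aggregator being floated could be sketched as follows; the names are hypothetical, not a committed API:

```java
// Sketch of the pass-through aggregator floated above (hypothetical
// names): an aggregator that performs no aggregation of its own and
// simply forwards each data point to user code, so the SDK still does
// the instrument and label-set bookkeeping while the rest of the
// pipeline is bypassed.

import java.util.*;
import java.util.function.*;

interface Aggregator { void update(double value); }

public class PassThrough {
    // Build an Aggregator that forwards every point to `backend` unchanged.
    static Aggregator forwarding(String labelSet,
                                 BiConsumer<String, Double> backend) {
        return value -> backend.accept(labelSet, value);
    }

    public static void main(String[] args) {
        List<Double> received = new ArrayList<>();
        // "Your code": whatever backend wants the raw points.
        BiConsumer<String, Double> backend = (labels, value) -> received.add(value);

        // In the real SDK the label set would be managed for you.
        Aggregator passThrough = forwarding("{path=/a}", backend);
        passThrough.update(1.5);
        passThrough.update(2.5);
        System.out.println(received); // [1.5, 2.5]
    }
}
```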
A
I'd be happy to talk about this offline, Victor, or something like that. Oh, okay, fine.
A
Anybody else want to talk about Q&A? I'm going to post a final schedule for tomorrow as widely as I can, and update the calendar and things like that, at some point in the next hour or two.
E
Someone else spoke; go ahead. So, in the brainstorming list there is a topic about OpenCensus compatibility. Will we talk about that tomorrow?
A
A spec is being written by Google on that right now. I feel it's not contentious; is there something we should discuss about it?
E
So, are there plans to integrate OpenTelemetry metrics with Micrometer, or bring their APIs closer together, or make them more compatible, that is?
B
Initially, when I thought about Micrometer, personally, I thought of Micrometer as having kind of an API plus some implementation that is able to do some of the aggregations. But later I heard that that's not necessarily so; there are ways that you can pass the individual measurements on to the other implementations, like Prometheus and such.
B
So I think there was a discussion happening with Micrometer. John is not here; John Watson was part of it. And there is also a channel on the Micrometer Slack called "opentelemetry," where there were a couple of discussions, and you can ask questions about that. It would be good to discuss that there.
A
Ken Finnegan was part of that discussion. I understand that there were multiple ideas presented: one was that the Micrometer implementation could be used as an export strategy, to get access to all the other Micrometer integrations, as well as Micrometer targeting OpenTelemetry as an output. I think, for the...
E
...existing users. Just to confirm: when you said "channel," what do you mean? Is it like a Gitter, or is it the new GitHub Discussions? "It's in Slack." Oh, you mean the CNCF Slack? "It's in the micrometer-metrics Slack." Oh, okay!
B
That is one you can join; I think there are not too many there. But yeah, if you join that channel, you can ask questions there and we can discuss. So, anyway: in my opinion, we have a definition.
B
We have an interface that we call the metrics exporter, and this metrics exporter accepts what we call metrics data. I think one option for the Micrometer integration is to purely produce the metrics data and push it, to become a producer for our exporter pipeline, and then every exporter written against our data model will be able to be plugged in as an exporter for Micrometer as well. That's one of the options. A second option, which I think we failed, three months ago, to really implement nicely, would be...
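The first option just described can be sketched like this; the types are hypothetical stand-ins for the interfaces being discussed:

```java
// Sketch of the first option described above (hypothetical types): if a
// Micrometer bridge produces records in the same "metrics data" shape
// the SDK's exporter pipeline consumes, every exporter written against
// that shape works for Micrometer-sourced data unchanged.

import java.util.*;

// The exporter-pipeline contract: exporters consume metrics data records.
interface MetricsExporter { void export(List<String> metricsData); }

// Anything able to emit records in that shape can feed the pipeline.
interface MetricsProducer { List<String> produce(); }

public class ExporterBridge {
    public static void main(String[] args) {
        // Stand-in for a Micrometer-backed producer.
        MetricsProducer micrometerBridge =
                () -> List.of("jvm.memory.used{area=heap}=1024");

        List<String> sent = new ArrayList<>();
        MetricsExporter anyExporter = sent::addAll; // e.g. an OTLP exporter

        // The exporter neither knows nor cares who produced the data.
        anyExporter.export(micrometerBridge.produce());
        System.out.println(sent); // [jvm.memory.used{area=heap}=1024]
    }
}
```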
B
I would like to be able to pass individual measurements from Micrometer to the OpenTelemetry API. That would allow me to use all the logic of, for example, reducing the labels (removing labels, selecting a subset of the labels) or extracting labels from the context as we want, and a bunch of other things that we would like to do. I don't know if that's possible, but I think we tried to do something like that.
B
We stopped at one point because of different testing problems and so on and so forth, but we can come back to that discussion, and I think we can have a conversation there. Now, in terms of aligning the APIs: I don't know how much intent there is from Micrometer to do that, and I don't think OpenTelemetry is ready to drop all the innovation that we're trying to provide to the community.
B
So
I
don't
know
what
we
we
can
have
a
chat.
If
that's
one
of
the
goal
and
we
can
see
what
we
can
do,
but
I
don't
think
that
was
a
goal
so
far
or
a
written
goal.
Yeah.
E
Okay. Do you happen to know if there were some thoughts, or any discussion, about the other way around? So what you were just talking about is basically: if the user is using the Micrometer API, how to bridge that over to the OpenTelemetry SDK. Do you happen to know if there were discussions about the other direction, where the user is interfacing with the OpenTelemetry API, but they want to publish their metrics using Micrometer?
B
If you have time, you can read my PR in Java, which I just added; I will send it to you here: 2534. It fixes a bunch of inconsistencies; like, we're now gonna start using a sum aggregator for some observers and stuff. Everything is great, but read it, and I'm pretty confident that you will find it counterintuitive, and you'll...
B
Let me know why, because I'm having a hard time simplifying everything. So I want you to read it and see if you find it... okay.