From YouTube: 2020-09-22 .NET SIG
B
Hey CJ, in terms of agenda, if there is time at the end, I'd love to learn a little bit more about what we generally think about metrics and the whole aggregation. I've heard some conversations about DDSketch and things like that.

C
So, you know, maybe first of all I'm sharing some information. Yes.
A
Yeah, I think we can start. If you can put your names in this list, that would be great.

A
Okay, yeah, I can actually talk about this as well. We were proposing to add Eddy as an approver a few months back, and the consensus at that time was to give him more chances to contribute. But currently he's one of the top contributors, so there is no need to delay this any further. I think he is good enough to be an approver.
A
He was already good, but now it's more than proven in terms of contribution. So for the proposal, I think I can just approve it. If there are no other objections, we will submit a PR after this and add Eddy as an approver.

A
Okay, now let's go to the next one: adding persistent storage to the exporter. Just walk us through your overall idea and what you're trying to achieve, so we can get some initial feedback from more people.
D
Sure. So, when telemetry is sent out from the exporter, for example a Zipkin or Jaeger exporter, we send it to a Zipkin endpoint, and due to some transient issue the endpoint may not be reachable, or the service may be busy at that time and send back a response like "retry after some time".

D
What we do at this point is drop the telemetry on the floor. So my proposal is: whenever we run into these kinds of transient issues, store the telemetry onto local storage so that we can retry at a later time. That retry also raises the question of how frequently we should retry against the service.
D
Maybe we can use a method like exponential backoff and retry for, say, 24 hours. After that, if we are still not able to send, we can drop it on the floor. We can also add a feature so that if the service is unavailable, we don't keep sending data to it.
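The retry policy described above (exponential backoff with jitter, capped at a roughly 24-hour retention window, after which the payload is dropped) can be sketched as follows. This is a minimal Python illustration; the function names and parameter values are assumptions, not the prototype PR's actual code.

```python
import random

def backoff_delays(initial=1.0, factor=2.0, max_delay=3600.0, max_total=24 * 3600.0):
    """Yield exponentially growing, jittered delays until roughly the
    24-hour retention window mentioned in the meeting is exhausted."""
    delay, elapsed = initial, 0.0
    while elapsed < max_total:
        # Full jitter keeps many retrying clients from synchronizing.
        sleep_for = random.uniform(0.0, min(delay, max_delay))
        yield sleep_for
        elapsed += sleep_for
        delay *= factor

def send_with_retry(send, payload):
    """Attempt to send; on transient failure, retry on the backoff schedule.
    Once the window is exhausted, the payload is dropped on the floor."""
    for _delay in backoff_delays():
        if send(payload):
            return True
        # A real exporter would schedule a timer here rather than block:
        # time.sleep(_delay)
    return False  # dropped after ~24 hours of attempts
```

The jitter is a common refinement of plain exponential backoff; the transcript only specifies "exponential backoff" and the 24-hour cutoff.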
D
So for that we can have some kind of backoff in the exporter. Application Insights, and the OpenCensus Python Azure exporter, have this feature already, and it has proven to be great because we don't lose most of the telemetry data. OpenCensus Python also has a separate feature for application exit.

D
We store data to the storage and try to transmit it. Even if the application closes down before the re-transmission, when the application wakes up next time we have an opportunity to send the data that got collected around exit time. So OpenCensus Python has a very good structure laid out already; most of the information here is from OpenCensus Python, so we can leverage the idea from there and do a small implementation.

D
Based on what we had in Python, I created a small prototype and sent it to OpenTelemetry as a draft PR. I kept it as a draft PR partly for the discussion here; I thought it's a good opportunity to discuss it in the SIG meeting and get everyone's feedback.
A
Is it part of an individual exporter, or does the SDK provide this feature so exporters can leverage it?

D
At this point I am adding it to the Zipkin exporter; I have added the code there. Getting this into the SDK is a bigger thing, and I don't know whether it needs to be part of the OpenTelemetry specification to be part of the SDK. That's the reason I have placed it where it is, but it could be a common implementation: it will be implemented in a way that it can be taken to any of the exporters we have available now.
D
So we could get this feature as part of the SDK as well. As I said, that's another discussion we can have: whether this should live in the exporter or in the SDK itself.

A
Okay. If it is in the SDK, it would be something that is part of the processing pipeline, where, if you turn one flag or one setting on, the SDK's exporting processor can react based on the export result. Right now we don't do anything with the export result; it's left up to the individual exporter. So we could either add more intelligence to the SDK itself, or alternately:

A
We just expose these classes, whatever our storage classes are, and make them part of the SDK, but they don't do anything on their own. You still have to modify each and every exporter to do local storage, backup, etc., but an exporter doesn't have to rewrite all the code; it can just reuse some of the components from the SDK itself.
D
Yeah, this option looks good. The reason is that, at this point, the ExportAsync functionality in the exporter just returns the result as a flag, so at the batch exporter we don't know which transmissions or which telemetry items failed.

F
Yes. Is there an alternative way to expose this as an extension to the SDK, so that if we don't put it in the SDK, people don't have to consume it? For example, if I have a high-performance scenario where I don't use any batching and I don't retry, I just dump the data on the kernel buffer, then the storage seems to be an extra tax. I have to pay for the storage, but I'm not benefiting from it at all.

A
Yeah, but by making it part of the exporter, we cover that scenario, right? If you are a high-performance exporter, you don't care about storage and disk or anything, so you just don't use it.
A
Okay, but if we make it a shared component, then exporters which are vendor-specific won't be able to leverage it. So we have to ship it as some package, similar to Extensions.Hosting.

A
We can create a package, so it's not part of the core SDK and you don't use it unless you want to. If you want it, you install that package, and any vendor-specific exporter that needs it can refer to something like OpenTelemetry storage extensions. Yes, okay.

A
We'll look at the actual API in a moment, but go ahead.
E
Right, is there any spec related to that? If not, maybe we can use this as a kind of proposal for a spec; it could be the base, the first implementation. But it seems that if you're going to have this, it should be in the spec, perhaps not for GA now, but at least later, yeah.

F
Yeah, so the idea is: we have something experimental, pick one exporter like Zipkin, and show how that would work. Once we get a good understanding of what API we might need, we can propose it to the OpenTelemetry spec. I think there's general interest; in C++ and Python I've seen people with similar asks. This is a good optional feature.
A
Yeah, so we will pursue changing the spec only after we have proven it in this repo. It's generally a little bit slower if you want to move things into the spec, so we'll just do it without any spec for now, and once things are in good shape here, we can take it to the spec. But it'll definitely happen after the 1.0 and GA of the current OpenTelemetry, yeah. If this approach settles how to structure the code and where to put it: Raj, do you want to have a discussion on the actual API itself?

G
About the App Insights experience with this persistent storage: have you ever run into problems with this type of storage in a containerized environment?
A
I mean, it is a non-issue in the sense that, in any place where the local storage is not persisted, yes, you will lose it. It's not just containers; in other places in Azure, for instance Azure Functions or Azure Web Apps, by default the storage is tied to that particular VM or that particular instance.

A
So after a restart you may or may not land on the same physical place, and when you restart, you would not find the things you put there earlier. This is a known issue, and there is a well-known workaround.
A
You can change the storage folder to something persisted, which is typically backed by a remote blob storage kind of thing. But from the SDK perspective we don't really need to worry about it, because the user has complete freedom over where they want the backing storage to be; they just configure it. If they configure a non-persistent one, yeah, they lose it after restart.

A
If they really care about data loss, if they don't want to lose data under any circumstances, then they can configure some storage which is backed by a blob or something. But generally that comes with a price, because it's going to be substantially slower than a fast SSD-based option.
A
Yes, so all the exporter gets is a storage folder, one input from the user: go and use this storage if you ever want to store something. Whether that storage is persistent, or gone when you restart, should not, in my thinking, be a concern of the exporter itself. If the user cares about reliability, they should provide a folder which is reliable; if they care about performance, they should provide a folder which is fast but not necessarily durable.

A
Yep, yeah. So mostly we are saying it's not an issue for the SDK. Whoever the end user is, they know best where they are hosting the application, so they can make the best call: should I give a local folder, or should I give remote-backed storage? It's not a concern of the storage API itself, as long as it provides the option to change the storage folder. And I think there is also a mention of a default.
D
Underneath, it's going to create a folder named with a hash of the identity of the running process; by identity I mean the user identity the process runs as, plus the executable name. It uses both, creates a hash out of them, and creates a new folder. The reason we do that is so that if multiple applications are running, they do not share the same folder name.
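The naming scheme just described (hash the process's user identity together with its executable name, so concurrent applications never collide on a folder) could be sketched like this in Python. The hash choice and layout here are assumptions for illustration, not the PR's actual code.

```python
import hashlib
import os
import sys

def default_storage_folder(root):
    """Derive a per-application subfolder under `root` by hashing the user
    identity of the running process together with its executable name, so
    that two applications never end up sharing the same storage folder."""
    user = os.environ.get("USER") or os.environ.get("USERNAME") or "unknown"
    executable = os.path.basename(sys.argv[0])
    digest = hashlib.sha256(f"{user}:{executable}".encode("utf-8")).hexdigest()[:16]
    return os.path.join(root, digest)
```

The same user running the same executable always maps to the same subfolder, which is what lets the data survive a process restart.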
A
Yeah, that makes sense, but Chris was referring to what the root folder is: you're creating a new folder, but is it under C:\temp or some other directory?

A
Yeah, so depending on where you're hosting, temp could mean different things. In Functions it returns one folder, which is not persistent; yes, same with Web Apps. But in VMs you typically get some more reliable storage. So it's up to the user, is what I believe, and if there are better options here, we can consider them.
A
Yeah, I think that makes sense, because if the user doesn't care, then by default we don't use any storage, which matches the spec, and this is an add-on feature which we are explicitly exposing. So if a user cares about it, they have to go ahead and enable it, let's say via an exporter option. Yeah.

F
Yeah. And regarding the guidance: for example, in some function-as-a-service environments, or environments where we don't control the shutdown and the local storage might get lost, I think we should be more careful and drive that clarity in the spec.

F
For now, I think it's fine to add the functionality and have it off by default. The user can optionally turn it on, and we set the right expectation: this is experimental, and we expect it to be part of the spec eventually. Yeah, okay.
A
That makes sense, yeah. Would you expect each vendor or cloud provider to give some guidance? Is that something we hope the spec can clarify? For instance, only the vendor would know what the situation is, what the persistency of the local storage is for, say, Functions or...

F
Lambda. Yes, I think each vendor should take care of that. And, for example, if you have three exporters and each exporter is owned by a different vendor, I would imagine they don't want to mess with the files created by another exporter. So they should go and create their own storage, and we will be careful in giving such guidance. Okay, yeah.

A
All right, so if there are no more questions: Raj, do you want to explain the API?
D
The API is very simple. It's based on file storage, local file storage, so it is initialized once with a path. Using that storage, we create a file, which I call a blob here; the failed transmissions become part of that blob. For example, if you look at storage.PutBlob: we have some JSON data, and we can just put that into the file.

D
It's going to put it in that storage folder under a generated name. If you look at the output, there is a file named with a date timestamp and a .blob extension. So it dumps that JSON to that file, and in order to get a file back, we do storage.GetBlob, which returns the blob with the most recent timestamp.
D
We also have the ability to lease a blob. The reason we have it is that, if there are multiple threads, it prevents the same blob being read by many threads. What we do is rename the file from the .blob extension to a .lock extension with a defined timestamp, so that we know other threads won't touch it, and once we have read it, we delete the file. In case something bad happens during this process, we don't leave the file as it is.
D
There is a maintenance thread which looks at it and renames the .lock back to the .blob itself, so the blob gets another chance.

D
There's a timeout on that, okay; that's when the maintenance thread will come in.

B
But the blobs are remote, or on the file system?

B
The storage: is it on local storage, or potentially on something like blob storage in the cloud?

D
It's on the local file system; it's just all named "blob". We named it blob to keep the naming operating-system friendly.

D
We can write anything to the file, but most of the services write JSON; that's why I just put JSON there.
F
Yeah, I think from the interface perspective we probably should take that as a byte array. And I also have a question regarding the coupling of the interface and the actual implementation. I think it's probably nicer to have an abstract interface and then provide a local file storage implementation. Then, for example, if some exporter has a special need...

F
...say they want to store the data in a TPM chip, or store the data in a binary database, or compress the data, they can implement their own version using the same interface. So it gets more composable. For example, if I'm using Zipkin and I have a special need for a nuclear power plant, I can use my nuclear power plant storage instead of a local file.
H
I was trying to think this through with Jaeger, for example. It builds these packets that it's trying to transmit. So let's say something failed: would you take the packets that you've composed and write them, or do you write before transmission, in which case you would need to serialize to something else? I'm just trying to think through where we would implement that.

D
So this is before the serialization to binary, because then someone can open the file and take a look at the data, at what is and is not transmitted. That's the idea, but...
D
It is not a goal. The moment we make it a byte array, we don't need to worry about what kind of data we write. It's all up to the exporter, because this is just going to be a common file residing there, so the exporter will decide, based on whatever send functionality it has, how to re-send the data.

D
For example, in this case, if I dump JSON, I will have another send-like method in my exporter to send this JSON. In this case it's all JSON data, so I can just send it, perhaps after a compression, and then send it, so yeah.
A
But basically it's up to the exporter, whatever it wants? Yes, we don't control it. You just used JSON as an example, but if Zipkin wants to put something else, we just let it; the API is general and can be used by individual exporters in whatever format they want. Yes.

B
Excuse me, guys. If we make this a byte array, then this whole interface, with read being able to read from a blob using this for-each thing, the whole API becomes different. So what is the purpose of this?
F
I think my point is that different exporters have different optimizations. Some of them might prefer plain text; some of them might prefer a very compact binary format. It's up to them whether they want to store the final on-wire format, the input format, or some intermediate format, and we don't really know which. I think we should just make this transparent, similar to a file storage, and if you look at the title of this PR, it is "persistent".
B
I think, and maybe I'm too far removed from this, so I'm sorry if this is disruptive, but when I think about this just from experience with Application Insights: if you make it all super abstract, then there is no point doing anything, because then people can just implement it themselves in C#. So if you want to create an abstraction, it has to be useful.
B
If I were to think about this, I would say yes, I do want to focus on file storage, because, realistically speaking, what else is there? Yes, there could be some other persistent storage. But first of all, if it's remote storage, then the whole thing is pointless, because 99% of the problems we are trying to solve with this will be related to being offline, yeah.

B
Agreed. So making this abstract with regard to some remote storage is kind of pointless, and making it abstract with regard to some other persistent storage that is neither remote nor file storage seems so specialized that we might not create the right abstraction anyway. So file storage seems like a very reasonable abstraction to me.
B
However, in terms of the shape of the data in the storage, having it be strings seems less reasonable to me, because honestly, in a performance scenario, I probably want to deal in terms of byte arrays or spans, as in not the distributed spans but the memory spans, the ones in .NET. So that would maybe be a better interface. And when I look at the whole example, when I go ReadLine in a loop, that means I'm reading lines.

B
That means I'm already in a text-based scenario where I have a concept of new lines, which kind of implies JSON, and I'm not sure whether that is generic enough for a high-performance scenario. Yeah.
F
Good point. If I could summarize what he mentioned, I agree. Number one: I think this storage should probably be named something like IStorage or offline storage, because the purpose is to solve the transmission across the wire; it doesn't make sense to have another remote storage behind it. So probably an offline storage or IStorage interface. Number two is regarding the actual storage format: I think we have already discussed this. It's up to the exporter, and this interface should just give people access to a byte array.
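The two points summarized above, an abstract offline-storage interface plus opaque byte-array payloads, could be captured roughly like this. The class and method names are invented for the sketch; the real interface would be C#.

```python
from abc import ABC, abstractmethod
from typing import Optional

class PersistentStorage(ABC):
    """Abstract offline storage: payloads are opaque byte arrays, so each
    exporter chooses its own format (JSON, compact binary, compressed)."""

    @abstractmethod
    def put(self, payload: bytes) -> None: ...

    @abstractmethod
    def get(self) -> Optional[bytes]: ...

class InMemoryStorage(PersistentStorage):
    """Toy stand-in for the default local-file implementation; a vendor
    could swap in TPM-, database-, or compression-backed variants that
    satisfy the same interface."""

    def __init__(self):
        self._items = []

    def put(self, payload: bytes) -> None:
        self._items.append(payload)

    def get(self) -> Optional[bytes]:
        return self._items.pop(0) if self._items else None
```

Because the interface only speaks bytes, the exporter's choice of serialization never leaks into the storage layer.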
F
I think that should be the goal, because different exporters have different retry policies and different formats, and I think we want them to be able to optimize for their scenario. And if multiple exporters exist, I don't expect them to store the same thing. For example, when one exporter has succeeded in exporting all the data, there's nothing for it to retry, while another exporter might get stuck because the network is not available for it.

A
Yeah. So in the scenario where the user has two exporters configured and we just give the root folder as some storage folder, each individual exporter gets its own unique subdirectory under which it stores its files. So there is no correlation between the two. Is that the idea? I think so, yeah. Okay.
B
So I was actually recently thinking about how I would do this in the tracer. The SDK may be slightly different because it may have multiple exporters, but I was thinking about whether or not we should build this into the tracer. When I was doing preliminary thinking about an API, I asked: what is the problem? I am writing this exporter, I already serialize things, I want to put things on the wire, and for some reason I'm failing.

B
So I have this byte array or memory stream that I am trying to put onto the network with whatever protocol I'm using, and somehow it's failing. So I would like an API where I can say: dear API, here's an already serialized data packet, just take it please and keep it. Next time I'm trying to send again and it fails again, it's a different packet; I want to give it to the API and say: here, take it and keep it.
B
So one API should take these data packets and persist them in some appropriate way. Then the other API says: dear API, if I gave you a packet earlier, please give it back to me so I can send it, and then later confirm whether I was successful or not. So: give me the next data packet that was given to you previously.

B
I now have some sort of lease on it, at least, and then I have to confirm that I successfully sent it, like with a queue message, because at this point I know that it's persisted and my application could crash. So I don't want it simply deleted from the persisted storage, but I do want to be able to say: I'm now confirming that I sent it, so now go delete it.
B
Exactly. So when I was thinking about whether or not I want this in the tracer, this was my thought, and then I decided it's too complicated for version one, so I just decided not to do it. But the SDK is much more mature than the tracer, so yeah, that's something to consider.

F
Yeah, and I think when you look at the API here, get, lease, and delete, it actually implements the semantics of a weak transactional data model.
B
Yeah. I think a better interface would be for the lease API to return, in one call, as a single sort of transaction, both the pointer to the data and the token for the lease. Then, at the end of the time period, I have to either confirm that I have processed the data, or release the data back for other consumers. The only difference from a queue here needs to be that, in a normal distributed queue...

B
...if someone else now comes along and asks for data from this source, they just get the next item. But in this particular case, something to consider is that some consumers might require ordering. So if somebody took a lease on this item, and I previously gave you another package to store, maybe nobody should get the next package until this one is either processed or failed, because some consumers might require order. Or it could be a parameter.
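B's proposed shape, a single lease call returning both the payload and a lease token, followed by either a confirm (delete) or a release (hand back to other consumers), can be sketched as a small in-memory model. The names, the FIFO choice, and the release-to-the-back behavior are illustrative assumptions; the ordering question B raises is deliberately left open here.

```python
import itertools
from collections import OrderedDict

class LeasedQueue:
    """Sketch of the transactional interface: lease() returns the payload
    plus a token in one step; the caller later confirms the send (delete)
    or releases the lease so the payload can be retried."""

    def __init__(self):
        self._pending = OrderedDict()   # token -> payload, FIFO order
        self._leased = {}               # token -> payload, in flight

        self._tokens = itertools.count()

    def put(self, payload: bytes) -> int:
        token = next(self._tokens)
        self._pending[token] = payload
        return token

    def lease(self):
        """Atomically hand out the oldest pending payload with its token."""
        if not self._pending:
            return None
        token, payload = self._pending.popitem(last=False)
        self._leased[token] = payload
        return token, payload

    def confirm(self, token: int) -> None:
        """The send succeeded: the payload is now safe to delete."""
        del self._leased[token]

    def release(self, token: int) -> None:
        """The send failed: put the payload back for another attempt."""
        self._pending[token] = self._leased.pop(token)
```

A file-backed version would implement lease as the `.blob` to `.lock` rename discussed earlier, with confirm as the file delete.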
H
This could potentially introduce duplicate span transmission. Is there anything in the spec that says whether that's okay, that you cannot do it, or that it's undefined?

F
So currently there's nothing in the spec about at-most-once or at-least-once guarantees, so there's no guarantee at all.
B
But in reality, backends, because reliable dedupe is very hard and costly at huge scale, often don't dedupe. They rely on best effort from the client to avoid duplication, and if duplication does happen, they just accept it. I think that while in theory dedupe would be great, reality is as I described, so because of that:

B
While we cannot guarantee the absence of duplicates, we should make as much effort as possible to avoid them, because we cannot rely on the backend to do it. In a case like you described, where we fail to receive the response, there is no choice; we will duplicate. But where we have a choice, by making things as transactional as we can, that would make it better.
B
No, no, not remote, you're right. But take what we have here: getting a blob, then getting a lease, and then deleting it. There is a possibility for some other thread to acquire the lease in between, or something like that. We should make it as transactional as possible. Yeah.

F
I agree. For the local part it should be as transactional as possible, yeah, and for remote we rely on the different exporters and backends to decide what they do. Normally dedupe means you need to keep a memory of what you have already seen, and you don't have infinite memory; you can only keep a window of things, and that's just reality.

B
And then, in reality, you will have many instances of the ingestion service, and if they want to dedupe, they have to either partition the data to the same instance or have some sort of cache. Both of those are expensive, and it may not be cost-effective to actually do it. Yeah.
A
So there are no more pending topics; I think we can resolve any other things in the PR itself. It's still a draft, so please go ahead and look at it. I think you can update the example to make it clear that the data is not JSON, it can be anything; it should be a byte array, I think, not a string array.

A
We'll discuss that in the PR itself, yeah, Michael.

A
Makes sense. All right, let's go back to the agenda, which is, yeah, almost over. So there are two asks from Sean to get some peer review. I think I reviewed this one; we briefly discussed it last week as well. There is a PR in which he's doing the first step of it.

A
Basically, what we want to do is this: all of the OpenTelemetry components are logging into EventSource, but there is no easy way to listen to that unless you install something like PerfView on the machine. The purpose is to have a self-diagnostics module which will listen to all these events and log to a file based on some configuration. So please take a look at the PR. This is just part one; it doesn't contain any actual code, it's just to get a feel of how the self-diagnostics usage is going to be.
A
I don't think we want to discuss it today; it's good enough to be covered in the PR, but just a reminder, please take a look at the actual PR. Same with the next one: there is one more thing which we are trying to automate. There is an automation suite as part of the W3C Trace Context, and we just did the first step of making it part of the CI. So we have a test which validates that our instrumentations are compliant with the W3C Trace Context, with some known failures which are by design in .NET.

A
So there is a PR open to handle that as well; please take a look. Is there anything specific you want to ask opinions on from more folks, or is it just "please go ahead and look at my PR"? I kind of want to know whether anyone wants to volunteer to look at my PRs, so I can have more discussion or whatever to follow up on.
A
Right, yeah. So there are two more agenda items. I put "logging plans" there just to give a heads-up on what I am trying to deliver before GA. As we already know, we have a commitment that traces will be GA; metrics are still up in the air, so it's quite likely that we will mark the metrics part as beta even after November. But logging:

A
It's somewhat in the middle of these two, because for logging, OpenTelemetry does not define a new API, so there is no API for us to go and implement. All the OpenTelemetry specification says is: integrate with whatever the existing logging API is for a given language, and in .NET that is ILogger.
A
It basically boils down to the ability to stamp each log message with the trace id, span id, and tracestate, whatever comes from the context. So as long as we do those two things, one being the ability to correlate and the second being a way to send ILogger logs into OpenTelemetry:

A
We can call ourselves done with logging, because there is no need to invent a new API; the spec says there's no need for that. Which means it would be a good idea to pursue minimal investment in logging, and then we can call traces GA, logging also GA, and metrics beta.
A
So, let's say, what we are going to have is an OpenTelemetry logger provider, which would take the ILogger.Log messages, convert them into the proto format as per the spec, and export them using OTLP. Sorry, were you asking something? Yeah, please, yes. So what we can do at the minimum is create something called an OpenTelemetry ILogger provider, defined as a logger provider that takes the log messages and serializes them into the OTLP protocol, because the proto files are already there, marked as ready for consumption, in the proto repository. So we can just serialize using that and hand it to the OTLP exporter.
B
And the destination is an OpenTelemetry Collector?

A
Yeah, we already know; there is a standard for it. Okay, got it. So we can export that; that's one option. And I don't know whether Jaeger and Zipkin support collecting logs, but this is something I will come back to with more solid plans in the next couple of weeks. This is just a heads-up that we intend to have a logging story.

B
Essentially, there is an OpenTelemetry spec that describes a serialization protocol for logs, and we are making an exporter that supports it. Got it, now I understand. Okay, thank you.
F
So the logging workstream in OpenTelemetry has two pillars. One is for people who already work in a language that has an established logging interface, like Log4j for Java and ILogger for .NET. These are not likely to change in the next 10 years, so the guidance from OpenTelemetry for logging there is: go and follow the protocol and build some story so we can forward those logs over this new protocol. There's also another workstream, targeting next year or even later, to define the OpenTelemetry-native logging API, and that part is making slow progress.

F
There are debates about whether we should invent our own logging API or just live with the existing language-preferred logging API. But there seems to be a demand, because many languages don't have an established logging API, like C++; I asked around, and nobody knows of a widely accepted cross-platform logging API there. Most likely, .NET people will just live with ILogger, and we're not going to invent another logging API; Python will just use the Python built-in logger, and Java will use Log4j.
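The "hook the language's native logging API" approach described above can be illustrated with Python's built-in `logging` module: a handler intercepts records and forwards them to an exporter callback, analogous to how an OpenTelemetry logger provider would hook ILogger in .NET. The handler name and the record shape are made up for this sketch.

```python
import logging

class ForwardingHandler(logging.Handler):
    """Illustrative handler hooking Python's native logging API: each
    record is turned into a simple dict and handed to an exporter
    callback, the way a logger provider would forward ILogger logs."""

    def __init__(self, export):
        super().__init__()
        self._export = export

    def emit(self, record: logging.LogRecord) -> None:
        self._export({
            "severity": record.levelname,
            "body": record.getMessage(),
            "logger": record.name,
        })

# Usage: attach it to a logger like any other logging.Handler.
exported = []
logger = logging.getLogger("demo")
logger.addHandler(ForwardingHandler(exported.append))
logger.warning("disk %s is full", "C:")
```

A real bridge would also stamp each record with the current trace id and span id, which is exactly the correlation requirement mentioned earlier in the meeting.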
F
F
A
So because of that we should be able to do logging, and metrics can follow after that. But now, since Greg also asked about metrics, we can briefly chat about metrics in the next eight or nine minutes.
F
One more thing for logging: some AWS folks reached out to me, and they want to also put some effort into the logging part. I wonder if someone here is willing to take an intern; I can probably explore whether they can give us one intern for this.
A
Is there an intern available from Amazon who is willing to work on OpenTelemetry logging? Then yes, we just need someone to guide them, right? Yes, yeah. Let's see: if there is any interest, please contact us in Gitter and we can connect with the right folks.
F
Okay, is it upcoming, or has the intern already started, do you know? I already met two interns from AWS on the OpenTelemetry C++ side, and I heard that there will be another intern wanting to work on .NET. It's up to us to decide whether we're taking that or not. Okay, got it.
A
Okay, yeah, I'll have some conversations and figure out whether I have the bandwidth to help with that. Otherwise, at this stage I do not know exactly what the OpenTelemetry logging provider is going to look like, so let me come back to it in a week or two, and then we can see whether there is enough scope for one internship, or whether it is much smaller in scope.
B
So my question was: I was talking to some folks who are engaged in the OpenTelemetry metrics group, and they asked me about DDSketch and its implementation for .NET. DDSketch is an algorithm that can provide aggregation of percentiles.
B
It has some advantages over T-Digest for the specific use case of APM-related percentiles, and there was a discussion in the metrics group about whether or not it should be a standard, like an OpenTelemetry standard, and they're debating it. The reason why they reached out to me is that, as part of deciding whether or not to declare it the standard, they wanted to consider plans to implement it for different languages, including .NET. For .NET they reached out to me because I work on .NET at Datadog, and DDSketch comes out of Datadog.
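To make the DDSketch idea concrete, here is a toy, hedged sketch of its core trick (in Python for brevity): values are mapped into logarithmically spaced buckets, so quantile queries are answered with bounded relative error, controlled by alpha. This is only an illustration; the real DDSketch, and any .NET implementation of it, also handles zero and negative values, bucket collapsing and sketch merging.

```python
import math
from collections import defaultdict

class ToyDDSketch:
    """Minimal illustration of the DDSketch idea: log-spaced buckets
    give quantile estimates with bounded *relative* error alpha."""

    def __init__(self, alpha=0.01):
        self.alpha = alpha
        self.gamma = (1 + alpha) / (1 - alpha)
        self.buckets = defaultdict(int)  # bucket index -> count
        self.count = 0

    def add(self, value):
        # bucket i covers the interval (gamma**(i-1), gamma**i]
        index = math.ceil(math.log(value, self.gamma))
        self.buckets[index] += 1
        self.count += 1

    def quantile(self, q):
        # walk the buckets in order until the target rank is reached
        rank = q * (self.count - 1)
        seen = 0
        for index in sorted(self.buckets):
            seen += self.buckets[index]
            if seen > rank:
                # representative value of the bucket, within alpha of
                # every value stored in it
                return 2 * self.gamma ** index / (self.gamma + 1)
        raise ValueError("empty sketch")
```

Feeding it the values 1 through 1000 and asking for the 99th percentile returns a value within roughly 1% of the true 990, while storing only a few hundred integer counters instead of every sample.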
B
So right now we don't have plans to implement this, because it's more of an SDK-focused thing; it's primarily important for custom metrics. The reason being that in the tracer we currently aggregate metrics in a separate process, and we will probably improve on that in the long term, because it's not very efficient, but in the immediate weeks we don't have plans to work on it. So I wonder what the stance is here with the SDK team, with the people who are...
A
We currently don't implement any aggregation which can produce histograms or percentiles; it's absolutely basic, just doing min, max, sum and count. That's the aggregator we have. There is an issue open for someone to implement it, but the reason why we were always delaying this work is to first have the API and metrics SDK spec stable, and then we will come back and invest more effort into it.
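As a hedged illustration (a Python sketch of the concept, not the actual OpenTelemetry .NET code, which is C#), the min/max/sum/count aggregator just described fits in a few lines:

```python
import threading

class MinMaxSumCountAggregator:
    """Keeps only min, max, sum and count for an interval, so it is
    cheap but cannot answer percentile queries."""

    def __init__(self):
        self._lock = threading.Lock()
        self._reset()

    def _reset(self):
        self.min, self.max = float("inf"), float("-inf")
        self.sum, self.count = 0.0, 0

    def update(self, value):
        # called on every recorded measurement
        with self._lock:
            self.min = min(self.min, value)
            self.max = max(self.max, value)
            self.sum += value
            self.count += 1

    def collect(self):
        # snapshot and reset at each export interval
        with self._lock:
            snapshot = (self.min, self.max, self.sum, self.count)
            self._reset()
        return snapshot
```

Because the SDK only talks to `update` and `collect`, a percentile-capable aggregator (DDSketch, T-Digest, a histogram) can later be swapped in behind the same two calls.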
A
So this is just some basic, almost prototype implementation based on a now one-year-old spec, which has completely changed. That is the current status. In terms of my understanding of what's going to happen in the next few months: similar to what we did with Activity from .NET, which is our Span equivalent, we are hoping to get a metrics API exposed from .NET itself. It currently has an API called EventCounters, which does not meet the OpenTelemetry API.
A
It
cannot
be
used
to
satisfy
the
open
elementary
api,
but
there
are
discussions
which
would
happen
in
the
next
months
to
include
some
api
in
the
dot
net
itself,
which
would
be
compliant
or
which
would
make
us
build
a
open,
telemetry,
compatible
or
compliant
matrix
api
on
top
of
it.
So,
and
once
that
is
done,
it
will
be
part
of
dotnet
6
release
coming
out
next
year
and
then,
like
this
language,
like
this,
can
decide
to
build
our
matrix
story.
A
On top of that, similar to tracing, which we are building on top of Activity, we will do metrics on top of .NET itself. So the exact .NET API is not yet decided.
B
But the DDSketch question is more about aggregation, right? Yes, it's related, but it's slightly separate. Yeah, so I mentioned this.
A
Because of the timeline. I think the goal would be: let me just wait for the specs to stabilize, then introduce the API in .NET and start using it. Yeah, we don't really need to have all the variations to get started; we can probably start with the absolute minimum, which could be min/max/sum/count, and then we can add more. The aggregator would be a plug-and-play model: you can plug in any aggregator. At that stage we would want to implement more advanced algorithms to support percentiles and histograms. Timing-wise, I mean, unless somebody is volunteering to do this work, I don't see this happening alongside the GA; it's not going to happen this year.
A
There is a parallel, or not quite parallel but related, effort. Alan from New Relic is currently offering to help: he's trying to get some metrics which we can derive from spans. It's not really custom metrics, and there is no API for it, but we'll get some metrics from the spans, like HTTP spans. There is a convention already written in the OpenTelemetry specification about things like response time.
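As a hedged sketch of what that span-derived metrics effort could look like (all class and attribute names here are hypothetical illustrations, not the actual component): a processor watches finished HTTP spans and folds their durations into per-(method, status) series, so no custom-metrics API is needed.

```python
from collections import defaultdict, namedtuple

# Hypothetical finished-span shape, just for this sketch
Span = namedtuple("Span", ["attributes", "duration_ms"])

class SpanMetricsProcessor:
    """Derives request-count and total-duration series from finished
    HTTP spans, keyed by semantic-convention attributes."""

    def __init__(self):
        # (http.method, http.status_code) -> [request_count, total_ms]
        self.series = defaultdict(lambda: [0, 0.0])

    def on_end(self, span):
        key = (span.attributes.get("http.method"),
               span.attributes.get("http.status_code"))
        entry = self.series[key]
        entry[0] += 1
        entry[1] += span.duration_ms
```

An exporter could then periodically read `series` and report, for example, average response time per method and status code.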
B
Then the aggregator will aggregate it, either to min/max/sum/count, or it will aggregate into some algorithm that supports percentiles. Yeah.
A
Like min, max, sum, count: we just use that and export it to Prometheus. So in the future, once we implement more algorithms, yeah, this can change; I mean, the API requires no change, it just emits the raw measurements.
B
So
no
concrete
thoughts
about
about
algorithms
that
support
percentiles.
Yet
right
yeah
I
mean
at
least
we
haven't
thought
about
it
in
detail.
So.
A
The metrics spec is expected to make some progress after the traces are GA, which is expected this month. So hopefully, starting next month, there will be more concrete specs, and then it makes sense to actively invest, because right now this implementation is based on a one-year-old spec, which is already out of date. We want to throw it away anyway and rebuild, so we just want to wait for the specs to be more stable before doing that.
A
So I just shared what my original plans are, and in that, percentiles, or supporting more aggregators, is not in my plan. But if there are folks who want to contribute a new aggregator which can do that, yeah, we can take it; just be fully prepared that it may be refactored or restructured when we rebuild the whole thing.
B
Okay, cool, makes sense; thanks for explaining. By the way, as you look at implementations for the aggregators, I recommend that you take a look at what Application Insights does, because...
B
C
A
Okay, yeah, we don't have any other agenda. So if there are any topics, please add them to the agenda, or we can discuss in Gitter or in the PRs themselves. All right, we don't have any new members, so no introductions. Then that's the end. Thanks, everyone.