From YouTube: 2022-02-03 meeting
B
Cool, we could probably jump into things here. I think we probably have a few things to talk about, so let's start. As far as my ability to handle and understand Zoom goes, everyone on the call is signed in as an attendee, so I guess I won't give the boilerplate there, but yeah. If you have anything else you want to talk about as an agenda item, please make sure you have it in the included agenda, and we'll jump into this. Aaron,
B
if you wanted to start us off, you have the first item, asking for feedback on this PR.
A
That's really what I want. This is the new metrics API that's being proposed. Currently it does break the current SDK; there's no way around that. I am working with Josh as we speak to port the SDK to provide these functions so that it can work with the API.
A
The docstrings are not in the best shape, but just generally I've laid out what I was asking for in the PR. So, if I can get more feedback: I will not merge this as it is, but I would like to see if people would approve of this just as an API change, so that we can make sure we don't need to iterate on it more for some feature that we're missing.
B
Yeah, that makes sense. I'm still working through it. It's a tough one, because I want to make sure that I understand it. We've done a few of these, and I find that it's always: I approve it, and then later on come back and go, oh, I guess that doesn't make any sense. So I want to make sure to pay a little bit of attention. I don't know anything yet.
C
Yeah, I think this is definitely something we can move forward with. I think Ezra probably knows the docstrings could use a little help; they're a bit hand-wavy in some sense and kind of repetitive, so it's not always clear what the difference between some of the instruments is, but it's a place we can evolve from. I think all of us understand what this means and what we'll be building from here. So as a starting point it's good. I'm looking at the amount of red there.
C
That's also very, very good. You know, getting rid of 4,700 lines to replace it with 750? I'll take that.
B
Cool, yeah, like I said, I'm still working through it. Hopefully I can get you a review this week, i.e. today or tomorrow. But yeah, anybody else on the call: participation's always encouraged and welcomed.
A
I think the biggest thing that I could use right now is: let's confirm, let's lock down the API, so that I don't have to continue to chase that with the SDK, and schedule in some time either the end of next week or the beginning of the following, because that's when I plan to have the rest of this SDK out the door so that people can use it.
A
Yeah, that's the idea, because there are a large number of simplifications that come from the generics, as well as a couple of design choices that were rough, that made sense at the time they were implemented, but have meant that there's a lot of extra code in the SDK that we could eliminate. The other thing that we will need, because I call this out in the PR: this isn't spec compliant, as the spec is written today.
A
There are a number of PRs, a number of conversations that are happening at the spec level, to allow for not having the callbacks be registered only at creation time, making it acceptable for them to be registered in a later way. That is probably the key one. Josh is actively trying to push those forward.
A
So that is a warning and a callout that I just wanted to make sure is plain and clear. If there could be any way you can review those PRs, that would also help. It won't help develop this faster, but it will help get this accepted faster.
C
I think we're going to wait, in any case, for those to be landed upstream, because it just doesn't make sense for us to ship an API without that change. Tyler, that's perhaps an area where you can contribute more than I, because you are a spec approver for metrics, whereas I just get a boring gray check if I give it the thumbs up.
B
Yeah, I've been doing my best to hit all those PRs, but if I've missed some, please let me know. I thought the same thing.
B
Yeah, let me double check. Do I have to share my screen? Sorry. This is the milestone that Anthony is referring to, if you haven't already seen it, but I think this looks pretty comprehensive. There's a lot of do-not-merges in here.
B
I think there's probably a lot more that we could add. So if you also know of something that needs to be included in this, that already exists, please go ahead and either, actually...
A
I have a question on that: 2555, the replace recording span. That seems to be...
B
I saw that earlier and I was like, oh, I didn't remove that? But I just never did, yeah. That's a good callout. Yes, it's also a good segue if we want to move on, but I just want to make sure nothing else must be said here.
B
Okay, so with that: one of the things that I've been working on recently is performance enhancements, and there are a few things that are going through. There's been a lot of discussion about this in the past, and so some of these are, as I said last week, not really... I don't want the credit for coming up with them, more so just really copy-pasting them. But one of the things that was mentioned is in these performance issues.
B
The PR is pretty comprehensive. The attribute map, in case you haven't been following along, is extremely resource intensive, for spans that use attributes as well as spans that don't use attributes, and it also isn't compliant with the OpenTelemetry specification. There's an issue tracking this, but essentially the drop order is wrong. There's a lot of context if you wanted to go and dig into this, but this is all here. So with that in mind, there's a replacement PR that was proposed last week, and it has some approvals.
B
One of the big drawbacks that I see from this PR is that, by using a map, the returned attribute values are unordered, which, as we discussed last week, is not a guarantee by the specification, and it probably isn't something that people should rely on. But it's always sitting in the back of my mind that it's going to be a disruptive user behavior to release something like this. I think that it's justified, but if we could avoid it... my goal was to say, let's see about trying to avoid it.
B
So that comment got me going on building out this alternate implementation, which uses a slice instead of a map to track attributes. It should be pretty straightforward to find; yeah, it's a very similar configuration. Instead of being a map, it's just a slice, and it realized pretty similar performance improvements. I kind of wanted to talk about that, so I did a little comparison here.
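For readers following along without the PRs open, a minimal sketch of the two approaches being compared might look like the following. The `KeyValue` type and both stores are simplified stand-ins, not the actual SDK types: a map loses insertion order, while a slice keeps it at the cost of a linear scan to handle duplicate keys.

```go
package main

import "fmt"

// KeyValue is a simplified stand-in for an attribute; the real
// OpenTelemetry attribute type is more involved.
type KeyValue struct {
	Key   string
	Value string
}

// mapStore tracks attributes in a map: lookups are cheap, but the
// original insertion order is lost when attributes are read back.
type mapStore map[string]string

func (m mapStore) add(kv KeyValue) { m[kv.Key] = kv.Value }

// sliceStore tracks attributes in a slice: reads preserve the order
// attributes were set in, at the cost of a linear scan to de-duplicate.
type sliceStore []KeyValue

func (s *sliceStore) add(kv KeyValue) {
	for i, existing := range *s {
		if existing.Key == kv.Key {
			(*s)[i] = kv // duplicate key: overwrite in place
			return
		}
	}
	*s = append(*s, kv)
}

func main() {
	s := sliceStore{}
	s.add(KeyValue{"b", "1"})
	s.add(KeyValue{"a", "2"})
	s.add(KeyValue{"b", "3"}) // duplicate key, overwritten in place
	fmt.Println(s)            // first-insertion order is preserved
}
```

The slice variant is what makes the "ordering stays the same" argument later in the discussion possible; the map variant cannot offer that.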
B
This is a comparison, using benchstat, between the PR we just looked at with the map, which is this PR, and the PR with the slice, which is the one that we're currently looking at. And so what you can see here is there are kind of ups and downs, specifically around processing time. I think it's generally a pretty good improvement. There are some outliers here, like, well...
B
There is an outlier here which, I haven't dug in too much, but it's something on the order of a tenth of a millisecond... or 100 nanoseconds, I think... a tenth of a microsecond, I'm sorry, my units are off right now. It's pretty minor computation-wise. Allocation-wise, it's a little all over the map as well. Things like, you know, not doing any span work.
B
Not doing any span work means that we don't have an allocation, and I think that just comes from the fact that even an empty map type has an allocation that gets made regardless, whereas a slice can still be nil, actually as a true nil. I think that's compiler dependent, so I haven't dug too deep into that.
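The nil-slice point can be illustrated with a small sketch: a slice's zero value is nil and costs nothing until something is appended, while even an empty map literal is a live, non-nil object that must be created before use (how and where each is allocated is, as noted on the call, ultimately up to the compiler and its escape analysis).

```go
package main

import "fmt"

func main() {
	// The zero value of a slice is nil: no backing array exists yet,
	// and len, cap, and append all work on it without a prior allocation.
	var attrs []string
	fmt.Println(attrs == nil, len(attrs)) // true 0

	// An empty map literal is non-nil: a map header must be created
	// before use; writing to a nil map would panic instead.
	kv := map[string]string{}
	fmt.Println(kv == nil, len(kv)) // false 0

	// The first append is what allocates the slice's backing array.
	attrs = append(attrs, "k=v")
	fmt.Println(len(attrs)) // 1
}
```

This is why a span that records no attributes can stay allocation-free with the slice design but not with the map design.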
B
I'm just looking at these numbers, though, and I just kind of remember this. So you see these sorts of things, but you also see these sorts of things where allocations go up, and I think that just has to do with the fact that there's going to be more overhead as we are trying to prune things and the slice capacity is increased dynamically.
B
So I think that these numbers are close enough that it is worth going with this approach, based on the fact that I mentioned earlier: the user-expected behavior of the ordering of span attributes is going to remain the same.
B
It's going to have some change, right, because we're dropping things differently; compared to what we used to be doing, we're now dropping according to the specification. So user behavior is going to change slightly. But that ordering: it is included in this PR, documenting that that ordering is not guaranteed.
B
That being said, this doesn't actually change the ordering. So I felt like this might be a more viable approach, and I wanted to get the community's opinions on this before I closed the other one and started this.
A
I wanted to call out, thank you for calling out, that that is a benchmark of the other proposal versus this one, because I was really confused that it was all up and down; I thought you were comparing mainline to this.
C
Could you add a benchstat comparing the current main to this, so that we could do that comparison directly as well? That might be helpful.
B
I actually already have it, it's just... if the meeting runs this long, I could probably even just pull it up in two minutes, but I'll do that after the meeting; it's not really crucial to see it right now. But yeah, it's similar to this, in that it realizes some really big gains in memory allocation sizes and values. They're just huge, huge improvements, like this one.
B
It really stands out. So if somebody definitely has capacity to expand, and is just, you know, blowing away that capacity, the current approach is horrible at handling memory management at that point, and the other approaches are much better. So yeah, I will do that.
C
For the allocations, I'm wondering if it might make sense to allocate the attribute slice initially with capacity up to the attribute limit, and if that might get rid of some of the additional allocations as you add attributes.
B
Yeah, so that's a good question, and this is an important point to make. It turns out to not be a good idea. It turns out the memory allocation in the Go compiler is pretty optimized, but the important thing to keep in mind is that by allocating that much, every span, including spans that have no attributes, has that much memory allocated for it, and so it actually produces worse results.
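A quick sketch of the trade-off being described, assuming the default per-span limit of 128 attributes mentioned later (the `attrLimit` constant and both constructors are illustrative, not the SDK's actual code): pre-allocating reserves the worst case for every span, while starting from nil lets spans that record few or no attributes pay only for what they use.

```go
package main

import "fmt"

// attrLimit mirrors the default span attribute limit of 128 discussed
// on the call; the real SDK's limit is configurable.
const attrLimit = 128

// newPreallocated reserves the full limit up front: every span carries
// a 128-entry backing array, even spans that never set an attribute.
func newPreallocated() []string {
	return make([]string, 0, attrLimit)
}

// newOnDemand starts nil: spans with no attributes allocate nothing,
// and append grows the backing array only as attributes arrive.
func newOnDemand() []string {
	return nil
}

func main() {
	pre, lazy := newPreallocated(), newOnDemand()
	fmt.Println(cap(pre), cap(lazy)) // 128 0

	// A typical span that only ever sets four attributes.
	for i := 0; i < 4; i++ {
		lazy = append(lazy, "attr")
	}
	// The lazy slice grew to a small capacity sized for actual usage,
	// not for the worst case.
	fmt.Println(len(lazy), cap(lazy) <= attrLimit)
}
```

Optimizing for the always-at-capacity case is exactly what the pre-allocated version does, which is why the benchmarks for it looked worse for the common, few-attributes case.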
B
You will actually see more allocations, because it has to allocate across word sizes in the page for the memory allocations, plus the allocated bytes go up.
B
Well, it still is less than main, but it goes up considerably compared to what this is. Including because, you know, if somebody's only adding four or eight attributes but it has the capacity to store 128 by default, there's just a huge amount of overhead that's incurred in those processes, and you see much worse statistics. And I was thinking about it like that: it is really optimizing for the case where somebody's always using it at capacity, and I think that's not going to be the common case.
B
Yeah, that's the feedback; I will do that. And it's making me think: there's another point in here where essentially I need to de-duplicate these records, and I make a map, and again, it turns out that if you don't pre-size this map, the Go compiler is actually really smart. It can kind of... I think it actually gets insight from the function it's in as to what that allocation site is going to be, and it does it very optimized.
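A sketch of that de-duplication step, under the same caveat that this is illustrative rather than the PR's actual code: the scratch map is deliberately created without a size hint, leaving the runtime free to size and place the small, function-local map as it sees fit, while the output slices preserve first-appearance order.

```go
package main

import "fmt"

// dedup keeps the last value seen for each key, preserving the order of
// each key's first appearance. The scratch map is created without a size
// hint on purpose; small function-local maps are often handled
// efficiently by the compiler and runtime on their own.
func dedup(keys, vals []string) ([]string, []string) {
	idx := make(map[string]int) // key -> position in the output
	var outKeys, outVals []string
	for i, k := range keys {
		if j, ok := idx[k]; ok {
			outVals[j] = vals[i] // later duplicate wins
			continue
		}
		idx[k] = len(outKeys)
		outKeys = append(outKeys, k)
		outVals = append(outVals, vals[i])
	}
	return outKeys, outVals
}

func main() {
	k, v := dedup(
		[]string{"host", "region", "host"},
		[]string{"a", "us-east", "b"},
	)
	fmt.Println(k, v) // [host region] [b us-east]
}
```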
B
So yeah, I'll add some more comments about some improvements that I made in choices there, but thanks, it's a good idea.
A
I want to call out two things. First off, I love that we are doing benchmark-driven development in this; we are looking for performance gains by actually looking at our performance. And the other addition to that is: I want to make sure that we keep our big picture. It's easy to write benchmarks that are really small and targeted at what we think, but don't actually encompass the whole user story.
B
Yeah, and I think that's a really good point, because when you look at benchmarks that are kind of up and down, you really have to determine that story, right? That is a good question to ask: are users going to be more commonly adding four, or zero, or 128 attributes? Whereas if you look and you see what we're seeing in a lot of these PRs, where across the board things are improving, I think that's a way easier story to sell.
B
You know, like when the benchmark's not even related to span attributes and the whole thing is going down, because just the default size is reduced, that is, I think, huge. Yeah, definitely.
B
I think, with that, there's probably enough feedback. I'm going to just kind of recap: I'm going to close this, because I don't think that pursuing this with the map attributes is worth it, given the kind of back-and-forth solutions here. We can always reopen things; it turns out closing is reopening. And I'm just going to pursue this one with some of the feedback tweaks, and then we can do the review of this. Thanks,
B
everyone who already reviewed it. There's actually a lot of similar things, so it shouldn't be too much to review the second one, but sorry also to incur extra review time. Okay, Brian, it looks like you're up next, talking about the crosslink PR and tool.
E
Yeah, hi. So the PR got submitted yesterday. It's kind of a big PR, because it's a big tool, but it works, and I'm pretty happy about it, and hoping to get some initial feedback on it. I have done testing, executing it against collector-contrib and otel-go: those repositories still work. And as an extra little bit of new info, I actually stumbled upon the Go 1.18 patch notes yesterday, and they're actually adding this feature called workspaces, which solves a lot of multi-module pain points. I actually call it out in the Google doc for the design, but another snippet in the proposal is that they say, you know...
E
Oh, these workspaces could be automated by a tool that basically populates the workspace. And the way crosslink is built right now is that, if we seek to support that, it would be a pretty straightforward addition, and I think that's pretty cool. So you could either do it with the new workspaces or do it how we have been doing it. So yeah, and once this gets reviewed and changes get updated, I will create PRs in collector-contrib and otel-go to use the new tool.
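For context, the Go 1.18 workspace mode mentioned above is driven by a go.work file at the repository root, which lists the modules to resolve locally. A tool like crosslink could generate something along these lines; the module paths here are illustrative, not the actual layout of either repository:

```
go 1.18

use (
	.
	./exporters/prometheus
	./sdk
)
```

Each `use` entry points at a local module directory, replacing the per-module `replace` directives that multi-module repositories have had to maintain by hand.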
B
Cool, yeah. I haven't taken a look at this yet, but thanks again for getting through and working this out. Have you also advertised this in the collector SIG?
E
Not yet. I didn't get it done before that SIG was done, but next week I will share it too.
B
No, that makes sense, yeah, cool. I would like to get this out, like we were saying also last week. I think this is something that's going to be useful to more than just these two projects, even; it could probably be useful across things outside of OpenTelemetry. So I really appreciate it.
E
I definitely tried to build it with that in mind, so hopefully it can get brought on to really any big Go project that does this multi-module kind of paradigm. So yeah.
B
Cool, awesome, thanks again, Ryan.
B
I guess I have two more items; one of them's not on here. One of the other things I worked on this week was this fix for the otelhttp transport handling protocol switching. It's got one approver; I'd love another review if you have time. I know there's a lot of requests for review in this meeting, so I understand that everyone's pretty busy.
B
But that being said, the other thing I want to talk about was doing a release. There's a fair amount of things that we have in the changelog that I think could probably stand as releasable, and I think there are a few more things in the pipeline that could probably also get included in this next release. So I was wondering what other people's thoughts on that were. This is one of the PRs that I would really like to include in a new release.
B
So does anybody else have any thoughts on PRs that we need to include in a release?
C
I think we are still at the point where the metrics exporters that exist in the contrib repo are not functional against the current SDK, because of the changes to histograms, or the histogram aggregation, if I remember correctly. I don't think we had any... we had some attempts to fix that, but I think there were problems with all of those PRs.
C
Those are likely to be the only things in contrib that are really dependent on the SDK, which is going to get significantly upended in the very near future, and maintaining support for them across those changes might be difficult.
B
Yeah, that's a really good question. Maybe we can talk a little bit about strategies here. I think that I probably wouldn't be in favor of removing them, because we get a lot of complaints when we do removals. That's a pretty heavy operation, especially if we plan to add them back
B
in the future, I guess. Maybe not; maybe I'm mistaken there. Maybe if we add them back in the future, it's not really as big of a deal, but I don't know the user story that well to understand the trade-offs on that one. I would wonder if we could deprecate them, but again, that comes back to that same story: if we plan to undeprecate them in the future, how does that user story look?
B
I agree that, you know, we should remove that support, and I definitely don't think it should be getting in the way of progressing our work on the metrics API or SDK, which it isn't, just to be clear, and so I think that's a positive thing. But I think you're right: they are kind of this tail, leftover piece of work from the last release we did that we haven't really addressed. I don't know if there are any other community opinions on this one.
B
So, Stephen Harris, do you use, I think there's the Prometheus exporter, by any chance? Or another? You don't?
G
Not yet. We keep looking at it, wondering whether we can replace some of our use of Prometheus or integrate it, but no, I haven't used it yet. Okay, yeah, sorry.
D
Tyler, what specifically regarding the Prometheus exporter, I mean, can you...?
C
The Prometheus exporter is in the core repo; the remote write one is in contrib, and there were some issues updating that, and the Datadog exporter, or the DogStatsD exporter, that we have in contrib, based on changes in the last metrics SDK release. My recollection is that the issues with the Prometheus remote write exporter were addressable, but the DogStatsD exporter was less addressable, because we no longer had an exact distribution and couldn't reconstruct histograms for it.
G
Yeah, I'll just say that it's a very confusing space, because there's both the opportunity to write data into a Prometheus instance, and there's also the opportunity to scrape Prometheus targets and then export them as OTLP metrics. You know, so we've debated doing both and have done either, yeah.
D
Which is typically what is actually being done for remote write: you're kind of chaining the Go SDK with the collector and then pushing to a remote write endpoint.
C
I think, long term, on the topic of exporters directly from the Go SDK, we might be best served by having an adapter to the collector's pdata format and then enabling the use of collector exporters, because there's just a much broader range of exporters there, and those are, for the most part, maintained by the parties who are interested in being able to export those formats, or to those destinations.
D
And I could give some background, given my team had actually built both of the remote write exporters, on the collector and the Go library. So I agree with Anthony that, you know, at this point, I think the project... and this is a discussion across several of the language SIGs.
D
The discussion has been: should we just make available pull exporters for Prometheus on the libraries, the language libraries, and then the remote write exporter through the collector? So you always have a single path which is well maintained for the remote write use case, and maintain that remote write exporter on the collector, and just use a chaining process, or pipeline, to be able to use it.
G
Just to go back, as a somewhat confused consumer of these technologies: I actually think this is a case where fewer options might help clarify what the right direction is to go, and then maybe even a short document that says something like, if Prometheus is involved in your world and you're trying to...
D
And again, if you do, do you have an issue open for it? If not, please open one, because we'll definitely add the documentation for clarifying that. Again, we communicate that all the time, you know, to different users. So again, I totally agree that should be... that's a...
B
No, that's fine, I've been talking too much anyway, yeah. I think that's a really good idea. Coming back to the initial question, though: how are we going to handle histograms and the exporters for the opentelemetry-go-contrib repository? I'm trying... I thought there was an issue tracking this, but I can't seem to find it. There it is, it's 1478.
C
I'll put a link in chat. Okay, thanks, there we go, yeah. So it looks like MinMaxSumCount was the aggregation that was removed, and I think Cortex, the Prometheus remote write, can be functional at this point. Datadog and DogStatsD, I think, had the issue that they don't have a good way for us to go from our histogram aggregation to theirs, and we were using the exact aggregation in order to emit data points into their aggregation pipeline.
B
Yeah, I would drop it. I don't see why we need to spend too much time on that. I think that there's also been... I'm also going to re-ask that question: why are we supporting the Datadog exporter here? I thought that was something we didn't want to do. I know there's history there, and I know most people on the call actually don't know the history, but...
B
I think this might be a good time for us to actually deprecate this package itself and just go with the DogStatsD one, instead of the Datadog-specific metrics exporter.
B
But I don't see why, if we aren't able to support exact data points here, we'd just support this. It's going to be a breaking change, but that's why this isn't a stable package yet.
D
Tyler, can I suggest: I can reach out to Datadog and ask them in terms of them maintaining it, but then, you know, we just say that we deprecate it.
B
Okay, thank you, yeah. So that would then just leave the DogStatsD and the Cortex exporter, and it sounds like, Anthony, the Cortex exporter should be resolved, if I'm not mistaken?
C
Yes, I think we could ship a new version of the Cortex exporter right now, but I question whether we want to maintain that going forward as well. For now we can keep it; it's been fairly low effort. But as we rewrite the SDK, depending on what the right interface between the SDK and exporters looks like, it might again lead to significant churn.
B
Yeah, that's a good point. Maybe we also just want to hold off releases on some of these things while we work on this SDK and API development, because that was one of the big things that we did accomplish in splitting things up with this versioning file: you know, if they're not already in the experimental section, we could partition them off from other things we did want to increment here, in the experimental section.
B
These can, I think, probably be split even more if we need to, I guess, at that point, if everyone else is okay with that.
A
My suggestion would be: let's take the ones we don't see ourselves supporting long-term. Datadog, because we just don't have the support for it, like the types don't match up, and Cortex, because we're going to recommend an alternative path.
A
Why don't we take this release as an opportunity to actually deprecate them and then just hold their version? We can leave the functioning code pointing at the old SDK for Datadog, and whatever the current SDK is for Cortex; leave them in a working state but deprecated, and then give ourselves some time to fully remove them at a later date, or whatnot.
C
Yeah, that's similar to what I would suggest as well: splitting that experimental metrics version set into metrics exporters and metrics instrumentation. Instrumentation is only going to depend on the API, and we can keep rolling forward with that, but then we'll deprecate the exporters, and I think in the near future we should also probably just remove them. I think, if we remove them, people will still be able to use the last good tag for them, if they continue, but we don't need to carry the burden of dealing with them going forward.
D
So, just to understand: does that mean that we would have a deprecation tag for a couple of releases, and the code is still bundled in, and then it gets removed? I mean, that's the practice we've been following on the collector, but again, I just would like to be clear about that.
C
Yes, I think it would not be for a couple of releases. I mean, it might be for a couple of releases of the other parts of the contrib repo, but we would stop with new releases of those modules; we would add the deprecation notice to them, yeah, and over some period of time we would wait before removing them. But I don't think it would be a set number of releases, because there wouldn't be any further releases of those modules.
C
We would have an issue; we would put a deprecation notice in the go.mod file for those, so users trying to install them will get a warning from the Go tooling, at least beyond a certain version. I think 1.16 is where that started, so the versions we support should always warn about that. And it'll be in the changelog, you know, the places where we would communicate changes, yep.
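The go.mod deprecation notice described here takes the form of a `Deprecated:` comment immediately above the module directive, which the Go tooling surfaces to users who depend on the module. A sketch of what one of these modules' go.mod files might look like (the module path and wording are illustrative, not the actual files):

```
// Deprecated: this exporter is no longer maintained; see the
// tracking issue in opentelemetry-go-contrib for alternatives.
module github.com/open-telemetry/opentelemetry-go-contrib/exporters/metric/example

go 1.16
```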
D
I can take that; I can do that, Aaron. I mean, I can write that up and submit it.
B
Yeah. And I think, Anthony, you were kind of talking about a different approach as well, one that would kind of short-circuit the OTLP and just go straight to the pdata, right? Maybe that's like a future, future addition, right?
C
That would, I think, be ideal. There are some questions to deal with around there, like: can we provide all of the things that an exporter needs for its life cycle and configuration management and all of that? But it's something that would be interesting, I think, to explore once we have the SDK in a stable spot, because it would allow us to offer a lot more exporting directly out of applications, without needing a collector or a sidecar or anything like that.
B
I agree. I think that's a good action item for the long term that we don't want to lose, so I'm going to create an issue for that as well, I think. But in the interim, the goal is then to recommend, instead of the Cortex exporter, sending to the collector and then utilizing it that way. With what you're describing, it doesn't go over the network, right? Yeah.
C
Right, yeah, what I'm describing wouldn't go over the network; it would be all in process. I think that's how the Honeycomb trace exporter used to function. It might have been the other way around, though: I think they wrote an exporter for the Go SDK initially and then wrapped it in something on the collector's side, but this would be kind of the inverse of that. Yeah, right, I remember this now. Yeah.
B
Okay, based on this, I don't think we probably want to do a release this week, given our waterfall of dependabot issues that come up if we don't get it out to contrib. So maybe that's something we could target for next week. I'll try to get an approval from Anthony and Aaron on that one; I see Aaron said yeah.
C
Let's bring it up on chat, say, on Monday; start kicking it off early in the week. That sounds good.
D
Yeah, I just wanted to call out that we have a metrics API milestone that we created; Anthony set that up. What I was requesting of everyone was: as you've taken some of the PRs, and the issues that exist, tag them here, because these are towards achieving 1.0 for the API. I just wanted to call attention to it, so that, as you are looking at or filing issues, you can just tag them with that milestone. And then we also plan to have a metrics SDK 1.0 milestone where we can tag other issues, but again, I just wanted to call that out.
D
I also understand that the SDK spec has a deadline of Friday for the exemplars.
B
Yeah, that's a helpful thing to have, yeah.
B
Appreciate it, cool. Yeah, thanks again, Anthony and Elena, for setting that up, and we'll try to flesh that out; I think it is a good idea. Ellie, I don't know when you jumped on, but we kind of talked also about... Aaron has a proposal out for the new metrics API, and we kind of talked through a little bit of the next steps on how we're going to progress that. So, okay.
B
Action item? No.
D
No, I know Josh is the secret powerhouse here, along with all of you guys, but I just wanted to say that, once I'm hoping we can get organized, then I'm happy to add other engineers to help under your...
B
...leadership. Cool. So that's it for the agenda items that we have listed in the doc. If anybody else has agenda items they wanted to talk about that aren't listed there, please go ahead and speak up.
B
Not anticipated, so that's also nice to know, too. So, yeah.
G
I don't have so much a story about the Go SDK specifically, but we have been working on setting up Honeycomb's sampling proxy, Refinery, which is sort of a successful thing to do, because it means our trace volume is high enough that we now care about sampling.
G
So it works great. One thing that was just confusing, I'll say again, it's not this SIG's problem per se, but it's a little confusing that, for OTLP over gRPC, let's say, the port is standardized whether or not you're using TLS. In other words, if you had servers that are receiving both encrypted and unencrypted, you're sort of forced to pick the standard port for one of them and not the other.
C
Well, it's good to hear, nice. Are you making use of compression there as well, to cut down traffic volume, or is it just the number of events that you're caring about reducing?
G
It's the count, right? The requests... let's say the traces are similar enough that we're just collecting way too many of them for the successful cases that nobody's looking at. So we're just trying to cut it down, to mostly focus on the erroneous, like the traces that describe the errors, and tolerate a little bit of vagueness in the counts for the successful cases.
C
Yep, okay, that makes sense. There's been some effort in the collector space to set up gzip as the default compression for outgoing gRPC, so I was wondering if that was also something you were looking at. But I know it's been of interest for other people, especially, you know, if they're sending to a provider that's in a different cloud, or somewhere where they're getting egress charges for their data. Oh.
B
That's really cool. I know that the Jaeger exporter's remote sampler is also something that we've included in the contrib repo, so you might want to take a look at that. I don't know if it's relevant, but yeah, go ahead, David, sorry, yeah.
F
Sure. So today was the feature freeze day for Kubernetes, so I have a few small updates that relate to our use of the SDKs. But the big one is that we're planning to introduce tracing in the kubelet, which will be cool, because the kubelet
F
has gRPC interfaces, and I suppose it could be... it doesn't actually have 20, I think it has six, but it's going to be nice to be able to tie all the random interface calls back to: oh, we're creating a pod, and that's why we called out to the network interface and the device plugin and all this other stuff. So it'll be fun. That's being driven by Sally O'Malley, but I'm involved mostly as a reviewer, and to make sure that... oh.
B
Yeah, that is really exciting to hear. I never thought that'd be a big part of the Kubernetes platform, but I don't know why not, because that's a great idea for it. So, yeah.
F
It's actually quite an interesting problem, though, because there's no, at least today, there's no thing that initiates a pod creation other than a watch event, which we don't currently have a good handle on: one, they're not guaranteed to be delivered, but two, that's not how Kubernetes controllers work. So figuring out how to do "I want to create a pod, but sample all the things" is an interesting problem that we'll have to solve eventually.
F
So it might be possible, but Kubernetes requires all dependencies to be on the same version of something; they don't allow version drift in any direction. So it simplifies some things, but makes getting to a 1.0 harder, yeah. But eventually we'll be there, and then it won't matter anymore, right?
B
I saw an issue come in for semconv, supporting the v0 semconv package, or something like that, that was needed by etcd. I haven't got too deep into it. Are you aware of that one?
B
Yeah, I'll ping you. I haven't looked too deep into it yet, like I said, but I think that that's related, so I thought of it; it just came to mind. That's really exciting. Thanks for sharing, both Steve and David; those are really cool stories. I love hearing them. It helps motivate you when you're in the doldrums of trying to wade through horrible tests or something like that.
B
Awesome, well, cool, thanks, everyone, for joining. We will see you all next week, same time, same place; otherwise virtually via Slack or in reviews. But yeah, thanks, everyone, for joining. Thank you.