From YouTube: 2021-05-27 meeting
D
It looks like we're just about at the one o'clock hour here in the Pacific time zone.
D
If you haven't yet, open up the agenda doc and add yourself to the attendees, and if you have anything we want to talk about, please add it to the agenda today. It looks pretty thin so far, which I'm excited about because I've got other stuff to do, but I'm always excited to talk with you. So add anything here you would like to talk about.
D
Cool, I think we might have quorum, so I will start sharing my screen. We can jump in here. Cool, yeah. Again, anyone who's just joined since I gave the announcement: please add yourself to the attendees list if you're on the call, and if you have anything you want to talk about, please add it to the agenda. To start off, we can go over the open RC project board and talk a little bit about the progress here; we're getting pretty darn close.
D
Okay, okay then, I will start off by talking about the TraceState and baggage stuff that I've been working on. The TraceState portion of this issue has been resolved and merged last week; I'm actively working on a baggage refactor here, mostly with this in mind.
D
This is the idea that Aaron kind of pointed out: we need the baggage implementation to use strings currently, and then eventually, if we did want to encode some sort of type information in the baggage, whether that's something that OpenTelemetry decides to do or whether that's upstream in the W3C, we will have some sort of properties field for the value, which is the metadata associated with the value, and there may be an encoding of type there.
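As a rough sketch of the shape being described here (simplified, hypothetical types, not the actual OpenTelemetry baggage API): a baggage member carries an opaque string value plus a list of opaque property strings that a future spec could use to encode type metadata, without this library interpreting them.

```go
package main

import "fmt"

// Property is opaque metadata attached to a baggage value. A future
// spec (OpenTelemetry or W3C) could choose to encode type information
// here; the library itself would not interpret it.
type Property string

// Member is a single baggage entry: an opaque string value plus any
// properties that rode along with it.
type Member struct {
	Key        string
	Value      string // always a string; no type is inferred
	Properties []Property
}

func main() {
	m := Member{
		Key:        "session.count",
		Value:      "42",
		Properties: []Property{"type=int"}, // opaque to us; the user may infer from it
	}
	fmt.Printf("%s=%s;%s\n", m.Key, m.Value, m.Properties[0])
}
```

Any inference from the property back to a typed value is left entirely to the user, matching the "opaque string" stance discussed below.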
D
But currently there isn't, and currently we don't want to provide that, for the same reason we didn't want to for the trace state: we want to provide it as an opaque string. So if the user wants to make some sort of inference there, they're able to, but we don't make any recommendation or any sort of determination; that's up to them. I have a, I'd say, mostly working...
D
I keep finding bugs as I'm writing the tests for the implementation of the baggage, so yeah, it's just a bunch of me writing all of the actual validation tests for the baggage state itself. So hopefully I'll get that out, probably next Monday, because I'm taking off tomorrow. There may be a chance I get it out tomorrow, but yeah. So I will probably have that up next Monday; that's kind of the goal on that one. And yeah.
D
I
think
if
the
gold
air
is
then
start
going,
is
monday
not
a
holiday
for
you,
oh
yeah,
that's
why
I
have
friday
off
as
well.
Isn't
it
so
yeah,
probably
tuesday,
is
what
I
meant
yeah
I've
been
pretty
heads
down
on
this.
So,
yes,
it
is
a
holiday
also
for
everyone
else.
On
the
call.
I
think
that
the
maintainers
meetup
was
put
off
as
well.
D
If
you
wanted
to
show
up
to
that,
just
a
heads
up,
it
probably
isn't
happening
or
other
sig
meetings
are
probably
not
happening
monday,
but
thank
you
anthony
good
call.
D
Yeah
yeah,
you
got
me
okay,
cool,
I
think
pablo
has
joined
at
this
point,
so
we
can
kind
of
jump
back,
and
I
think
this
is
something
that
robert
is
on
the
call
as
well
to
kind
of
talk
about.
So
maybe
we
can
jump
in
here
gustavo
if
you
wanted
to
give
us
a
little
update
as
to
where
we're
at
on
the
removing
of
the
instability
of
the
metrics
dependency
here.
C
Yeah,
actually,
there
is
not
so
much
that
I
have
to
talk.
The
last
thing
that
I
have
done
was
cop
every
depending
so
that
this
otlp
trace
and
the
otlp
trace.
Jrpc
packages
depends
on
like,
for
example,
the
connection
and
the
otlp
config.
B
My biggest concern here is that you have an API where you have the options from one package and then you have an exporter from a different package. So, just so I understand: the options already have some internal package where you have some reusable stuff, and then everything is wrapped there, correct?
C
Yes. Like, for example, the options: I wrapped those in the otlptracegrpc package, so the user can only interact with that package.
B
So
I
would
say
how
I
imagine
it
could
save
us
if
there
will
be
some
specific
thing
you
have
in
one
of
these
exporters
that
you
want
to
have
only
for
one
specific
implementation.
I
have
no
idea
currently
what
could
be
some
performance
improvements.
You
know
something
like
additional
that
I
don't
want
to
post
something.
I
just
have
completely
no
idea,
but
but
I
know
that
you
know
in
some
time
they
make.
I
I
imagine
there
may
be
some
specific
exporter.
G
Right, so there isn't a separate exporter for gRPC or HTTP. There's a separate client, which is used by the exporter to say: here's some protocol data, send it on the wire however you're supposed to send it on the wire.
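The exporter/client split described here can be sketched roughly as follows (simplified, hypothetical names; the real otlptrace packages carry much more detail): the exporter is protocol-agnostic and delegates wire transport to a small client interface that gRPC and HTTP each implement.

```go
package main

import "fmt"

// client is the transport seam: the exporter hands it serialized
// protocol data, and the client decides how it goes on the wire.
type client interface {
	upload(data []byte) error
}

// exporter is protocol-agnostic; it works with any client.
type exporter struct{ c client }

func (e *exporter) export(spans []string) error {
	// The real code would marshal spans to OTLP protobuf;
	// here we fake the payload.
	payload := []byte(fmt.Sprintf("otlp:%d spans", len(spans)))
	return e.c.upload(payload)
}

// grpcClient and httpClient are two transports behind the same seam.
type grpcClient struct{}

func (grpcClient) upload(data []byte) error {
	fmt.Println("grpc send:", string(data))
	return nil
}

type httpClient struct{}

func (httpClient) upload(data []byte) error {
	fmt.Println("http send:", string(data))
	return nil
}

func main() {
	for _, c := range []client{grpcClient{}, httpClient{}} {
		e := &exporter{c: c}
		_ = e.export([]string{"span-a", "span-b"})
	}
}
```

The design benefit is that adding a transport only means adding a client, never touching the exporter's batching or lifecycle logic.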
G
Yes, it's just a convenience wrapper. We could kill it entirely and not lose much; it's just a nice convenience for the end user. If they know they're going to be using OTLP gRPC, then they just instantiate that directly, rather than instantiating a client, then sticking it into an exporter, then sticking the exporter into a span processor, then the span processor into a tracer provider. It just cuts out one level of that mess of instantiation that has to happen.
G
Okay, as for why the options were wrapped in their own type: it's because the actual implementations live in an internal package that wouldn't be visible to end users and shouldn't be exported. So I think that's why they're slightly different and are wrapped with their own type that is local to the gRPC client.
D
Okay, and while you're doing that: Aaron, I think you added something as well.
A
I literally just noticed this before this call; we don't need to spend too much time on this. Exactly, I'm fine with it this way, just...
D
Bringing that up. Okay, so we may need to change it in the future is what I'm hearing. Okay. I would like to merge this right now; I think that all of the contention has been resolved in this conversation. Robert's gonna put in the comments to say that he's resolved. I think, Karen, you had a comment here, but otherwise it has three approvals.
D
I think that we could probably iterate on this if not. I mean, 99% of this is good to go from what I saw, so I'd just like to... I don't know, if anybody has any dissenting opinions on that, please let me know, because I'd rather just hit the merge button right now. Maybe... nope, I can't hit it right now. Okay.
D
...not able to hit it right now, but soon, soon, yeah, soon. Okay, then that should be good. This is gonna get, I think, a few more PRs next week. I feel like these PRs are gonna be much smaller in the future, given the way that this one kind of structured what we expect here, so I imagine this should accelerate next week.
C
Yeah, I will try to open the HTTP trace one today, because I have already prepared that; that's going to be a shorter one. The first metric one is going to be somewhat similar to this, and the second one is going to be short as well.
C
I just kept it for now, but after all of them get merged, there will be a PR about deleting the old one.
G
Can we delete it now? Because I think it's going to sit there effectively defunct, right? And unless we're going to release a 0.21 before we have an RC1, there'll never be another release that includes it.
D
Yeah, yeah, I would definitely recommend that, and I think that that's a fast approval and we can probably get that done. Maybe.
C
And I just didn't want to, like, remove the old otlp trace gRPC one, because the metrics and the trace exporters are very tightly coupled. So I would need to remove specific parts of the code instead of just deleting the whole file.
D
Okay, I think that's a really good idea, just to get the thing moving. Oh man, my friend...
D
I will come back to that. Okay, I think we are all set on the in-progress column; we've gone over the two of them. This one we've talked about a lot, so I'm not gonna get into it too much. I'd love to get somebody to own this: this is just pulling out the HTTP JSON OTLP part. I don't know if anybody has any cycles, but if you do, this would love to have an owner.
C
Yeah, in the following one that I'm going to make, about the otlp trace HTTP, I have already removed that. So that's going to be... if that's...
D
Cool, I think we are on a roll then. I think that's about it for the review of the project board, so we can jump into the agenda here. There's only one item, and I think I have some things to talk about afterwards, around the RC, maybe. But maybe just jump in here; Anthony, you want to take it away?
G
Yeah, so I think you've seen this issue before too, but there's a question of whether we could share the semantic conventions that we're generating with the collector. I think it's a good idea if we could perhaps move it into a separate repo where we could have a new vanity domain, or a new path of the vanity domain, pointing at it.
G
My only concern is that the semantic conventions all depend on the attribute package, because they're all attribute keys or key-values, so I think that would probably need to go with it. That doesn't have any dependencies, except for a couple of conversion helpers in internal, which could be copied or moved if they're not used elsewhere.
G
But I know that will be yet another significant change just before we hit an RC.
A
Yeah, so I have a question: how feasible would it be to have different attributes that are, like, copied from one type to the other? Like, there's the semantic convention attributes, which are in the key-value type; are those fundamentally, like, linked?
G
So the key type is just an aliased string with some additional functions added on to it. So I think, in terms of representation, it's fairly simple to move between from one to the other; you should be able to cast any string. But the value types, I've tried to avoid looking too deeply into them. They work and they aren't broken, and I know there's some black magic in there.
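The point about the key type can be illustrated with a small, self-contained sketch (modelled loosely on the attribute package, with simplified stand-in types): a key is just a defined string type, so casting between string representations is trivial, while the typed constructors hang off it as methods.

```go
package main

import "fmt"

// Key is just a string underneath, so any string can be cast to it.
type Key string

// KeyValue pairs a key with a loosely-typed value.
type KeyValue struct {
	Key   Key
	Value interface{}
}

// Typed constructors are plain methods added onto the string type.
func (k Key) Int(v int) KeyValue       { return KeyValue{Key: k, Value: v} }
func (k Key) String(v string) KeyValue { return KeyValue{Key: k, Value: v} }

func main() {
	raw := "faas.max_memory"
	k := Key(raw) // casting a plain string is all it takes
	kv := k.Int(128)
	fmt.Printf("%s=%v\n", kv.Key, kv.Value)
}
```

Because the key carries no dependencies beyond the string type itself, moving keys between two codebases is cheap; it is the value representation that each side would keep to itself.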
G
I think if we were to go that path, we would just keep them entirely separate and share the semconv generator with a different template, so that we could get the same names but have it generate different types, one for the Go SDK and one for the collector. But there's, I think, slightly less value there.
G
They've got even more, because they've got pdata, which is the internal representation that everything is marshaled to before it gets sent out the other side. So it's even more complex, but I think there would probably end up needing to be some conversion from our attribute type to pdata. The stuff that we're doing for OTLP conversion is probably already most of that, though, because pdata is kind of loosely based on OTLP.
D
That kind of brings up a good question, though: is the package that we produce for semconv ultimately gonna be usable by the collector? I mean, like, would they be okay if we kept it in the attribute form, or would they want to, like, change that form?
G
In any event, I guess the real question is: where and how do these attributes interact with pdata, and does it matter that they're different formats? Or would there be any value in just generating a separate pdata-based attribute package, or some kind of package? Yeah.
A
If they don't want to do that, and they want to do semantic conventions directly in their pdata format, then maybe we have a package that is literally just the binary, and we don't share the same template but have a similar template form for generation.
G
Move the semconv generator out into a separate repo, where, I suppose, yeah, it'd be easier to have it in a separate repo, so they don't need to have an internal tool dependency on us as well. But yeah, there are many more ways to skin that cat than the direct dependency.
A
And that's kind of where I think it would probably be prudent to ask them, like, what's their willingness to depend on us. If they're okay with the attributes, I say just leave it where we have it and move forward with that, and have them generate some kind of conversion tool from our attributes to pdata. But even then, like, we could expose just a command tool here, since we're the ones owning it. But, like, I guess that really ends up being about who wants to maintain that, right? If they're willing to contribute to that and not have to become part of, particularly, the Go team, then cool, I'm fine with that. I'm, how to put it, less worried about the technical solution and more about the social solution.
G
Yeah, and I think the advantage of sharing the generator and all of its replacements is that you can still use the same variable names, with all of the Go-idiomatic weirdness around initialisms and all of that stuff that we handle in those replacements. And you could probably, the same way we were doing these schema versions, some kind of package, or planning to do it, right: somebody could have a semconv reference that is using our key-value attributes and move to theirs very easily.
A
The only warning I would have on that is that we need to make sure that both the attributes and the pdata have the same kind of conversion interface. Not like an actual interface that we'll be using throughout, but, like, when we say an int, or generate an int, it's the same in pdata as well, like the Int function. So as long as those match up, then it's probably, probably okay.
D
So one of the things is also... that's an interesting point that you just made, and it's maybe related; maybe I don't fully understand kind of what you're saying. But just one of the things that I do think is kind of interesting is that all of the semantic convention keys, right, those are going to be strings, but the types that we kind of pointed out are, like, yeah, those can be other types, right?
D
So currently we have some variables that are actually the value types here, but a lot of them are just going to be strings. The thing that I did think was kind of interesting, though, was that sometimes there is a type, and so, say, like, right here: the amount of memory available.
D
This should essentially produce an int type, right? But there's not a real restriction here, because the user could take this key and produce, you know, whatever they want from it, essentially. Like, there's no actual... I think there's not even a helper. They could do that, not intentionally; like, they could always do that, but I think they could do that unintentionally, saying, like: well, I feel like this, you know, should be... memory should... well, I'll just keep that as a string.
D
That's fine by me, and they'll just use it as a string, but, like, maybe the type needs to be attached, even if they do, like, "128 mb", right? Yeah, no, exactly right. And so one thing that I did notice is in some of our instrumentation packages in the contrib.
D
We provide, like, functions that will create the value here based off of, like, a passed type, and that's where you can start using the Go typing system. And, you know, that might be something... I've always kind of kept this in the back of my mind.
D
I thought it was something we could do, and I still think it's maybe something we could iterate on, where we take this key and we essentially add, like, another function that would associate that key to a produced key-value, and it would use the right type system that you would actually want to preserve. But I think that might also be useful here if we're talking about, like, having to unify, to make sure that the output type is gonna be the same across the collector as well.
A
So, in practice, what this would look like is sort of like: you have the, like, faas service name, and then that object would only have the String method to attach an actual value, to get a key-value out that is of the appropriate type. It wouldn't have an Int method, it wouldn't have a Float64 method; it would just have the String method.
D
We could make something happen there; we've done this before, but I think, yeah.
D
And so, but I think that, like, what we could do is just have a function that was just called, like, FaasMaxMemory, and, you know, that would accept an integer and then it would produce a key-value, right, like that. And I think that's something we could generate, honestly; it doesn't seem like it'd be too hard. This file would, you know, grow tenfold, essentially, but, like, yeah, I think that's something we could do.
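A sketch of the generated helper being proposed (the FaaSMaxMemory name is taken from the discussion; the surrounding types are simplified stand-ins, not the real attribute package): instead of exposing only a string key, the generator would also emit a function whose signature pins the value to the type the convention specifies.

```go
package main

import "fmt"

// KeyValue is a simplified stand-in for an attribute key-value pair.
type KeyValue struct {
	Key   string
	Value interface{}
}

// FaaSMaxMemoryKey is what the generator emits today: just the key,
// with no restriction on the type of value a user attaches to it.
const FaaSMaxMemoryKey = "faas.max_memory"

// FaaSMaxMemory is the proposed typed helper: the int parameter means
// a caller cannot accidentally record this convention as a string
// like "128 mb".
func FaaSMaxMemory(mb int) KeyValue {
	return KeyValue{Key: FaaSMaxMemoryKey, Value: mb}
}

func main() {
	kv := FaaSMaxMemory(128) // compile error if you pass "128 mb"
	fmt.Printf("%s=%v\n", kv.Key, kv.Value)
}
```

The type enforcement happens at compile time, which is exactly the "use the Go typing system" point made above; the cost is a much larger generated file, one function per typed convention.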
G
And I think we could do that in an additive manner as well, because these are all suffixed with Key. Yes.
G
If we strip the Key off and turn that into a function that takes a value of the appropriate type, then I think we can do that.
A
I'm not... I don't think you can. You can just do an equals, a function, and then just an anonymous...
D
Yeah, I think we could... we could, I think, prototype it, but that's the other thing: I've always kind of thought about that as, like, a next iteration. In fact, that's really low on my, like, prioritization stack here, and so, like, I just, you know... it may be useful here if we're gonna try to, like, preserve that with, like, the collector and try to, like, unify the type.
D
You know, a thing to reference, especially if you're, like, you know, on the other end of this, where you're receiving telemetry and you want that unified form of what this key-value should be in. Like, you could just reference the key, but it may still be useful, like, to have a function that's associated with this, like, with the type. I don't know, just a thought. I don't know that I want to do that, but I think it may be something that might help inform this conversation.
G
If we're going to move it, we need to move it before 1.0, and if we're not, then 1.0 means it lives where it is forever, right? So we should make the decision quickly. I think it's probably best to keep it as it is, especially if the collector eventually intends to take a dependency on the Go API anyway.
D
I'm kind of leaning towards that as well, and also for the reason, the fact, that, like, the semantic conventions themselves have a unified source of truth, and that is the YAML.
D
So, like, I mean, we're taking that source of truth and generating it in the form that we want. It may make sense to, like, share the generation code, like we're talking about, but, like, the collector may just find that information, you know, conformed to a different standard more useful to them, and it may just make more sense to standardize off that centralized point of truth, rather than our point of truth and then translating from that.
D
Yeah, I'm kind of leaning towards just keeping it where it is, but I like this idea of splitting it out if we can make it happen. So it's worth thinking about, at least.
D
Okay, cool. I think, with that, we're through the agenda, so we have a fair amount of time to go. Maybe we can talk a little bit about what the plan is for the RC.
D
I think this has been kind of a topic in the back of everyone's minds, and I don't have any, like, strong timelines here. Like, we're getting close, obviously, as we can see on the project board. So I'd like to maybe, like... we've talked about it in the past, but maybe define a little bit what this RC process is gonna look like. And, I don't know, how long are we gonna let it bake? How do we iterate on it, and that kind of thing?
G
Sure
so
I
I
think
earliest
we
we
should
do.
It
is
after
the
next
spec
release,
because
that
will
include
the
schema
url
and
I
would
like
for
us
to
fold
all
of
that
in
and
include
semantic
conventions
with
schema
urls
and
all
of
that,
but
that's
also
supposed
to
be
next
week,
so
that
shouldn't
be
much
of
a
delay.
If
any.
D
D
D
Yeah
I
mean
I
heard
gustavo
just
found
out
what
red
bull
is
so.
D
Sorry,
maybe
gustavo
wasn't
paying
attention
but
yeah.
I
I
think
he's
got
a
lot
to
do
so.
Okay,
I
think
that's
good.
I
like
to
I'm
wondering
what
people
think
about
just
letting
it
sit
as
like
a
baking
period
of
timelines.
Like
are
we
people
thinking
weeks
months
or
years?
I'm
not
thinking
years.
I'm
not
really
thinking
months.
Honestly,
I'm
thinking
about
two
weeks,
but
I
don't
know
what
anybody
else
is
thinking.
G
That's,
I
think,
I
think
we
need
to
make
a
blog
post
once
we
hit
1.0,
announce
it
to
the
world
yell
it
far
and
wide,
and
ask
for
feedback
explicitly
in
that,
so
that
I
I
know
people
are
starting
to
instruments
like.
I
just
saw
a
container
d
pr,
where
they're
starting
to
build
that
in
they're
building
it
at
the
o20.
A
Yeah, like, I would say, rather than just a timeline, it'd probably be prudent to have, like, at least some bit of feedback, whether it's good or not. Is there anybody we can reach out to, to help us, like: (a) announce it, and (b) run any kind of testing, gather data, have people use it and give us feedback? Even if it's in a contrived example, like, anything, like, literally.
D
I think that's a really good question. There's a lot that comes to mind. I definitely know that people on the call are probably our best bet, because we have the direct contact with them. So I know that there are some users at Splunk; I know Anthony at AWS, and I know Steve over there has definitely, you know, described use cases. We have New Relic on the call as well: Rich.
D
I don't know if you have any, like, understanding of, like, where we're using that at New Relic, but yeah, like, I think the first place we should reach out to is to try to get commitment, once we do the RC, from the people I just said, and, you know, see if they can get some internal updates, as well as just poking people at the company.
D
You know, if somebody else is using it... oh, David is around here as well, from Google; like, I definitely know Google's using it as well. So yeah, like, if we could, you know, when the RC happens, poke... I think asking internally here is a good source, and we could try to get some sort of, like, you know, channel going, so we can ask if they will, you know, update any sort of thing and could get back to us within a time frame.
D
I think, outside of that, we have Ted Young, who has been super interested in this kind of thing, and definitely we can touch base with him. And that's also a good thing at Lightstep. I guess Gustavo is also up, so we could ask Gustavo to touch base with Lightstep, but also Ted is really big in the community as well, and so we can touch base with them.
D
I know that Jana might also have some really good advice on that one as well, so I think there's definitely some good people; I'm probably missing some as well. But I think the question that we ask is going to be really important: like, hey, we have an RC out, can you get back to us within two weeks of, you know, updating it, giving it a new try, and finding any bugs and reporting them? That's the question we need to be asking.
A
Good,
I
was
gonna
say
bugs
are
showstoppers,
but
I
would
also
ask
for
more
general
feedback
of
like
usability,
because
remember
this
is
our
our
1.0
like
we're
stuck
with
this.
So
so,
if
there's
something
that
we
can
make
better
before
we
go
fully
live,
then
I
would.
I
would
like
to
see
that
so
I
know
I
am
no
usability
experience,
engineer
whatsoever
like
it's
completely
out
of
my
field.
D
Yeah,
I
think
that's
that's
a
really
good
thing
and
I
think
ted
might
be
the
best
to
help
in
that
respect.
I
think
he's
done
a
few
of
these
just
in
general,
across
the
telemetry.
So
maybe
we
can
try
to
reach
out
to
him
as
we're
going
into
that
rc
process
to
get
get
a
question
or
going
out
for
that.
I
think
that
that's
a
good
good
point,
yeah.
G
Yeah,
I
think
ted's
already
got
a
questionnaire
that
he's
been
using
we're
trying
to
get
out
to
early
adopters
and
get
feedback
on
which
there's
you
know
some
usage
of
go
in
there,
but
it
may
be
good
to
be
able
to
point
people
at
it
at
that
and
say:
okay,
if
you're
using
the
rc
now
go.
Please
give
us
some
feedback
through
this
also
yana
has
certainly
been
in
contact
with
people
who
are
instrumenting
with
the
sdk.
That's
why
I
heard
about
the
container
d
instrumentation
that's
happening
so
I'll.
G
So
hopefully
he
will
be
able
to
ask
people
for
usage
and
feedback
notes.
D
Yeah
and
janna
herself,
hopefully
just
time
because
in
the
past
we've
gotten
really
valuable
feedback
from
her.
Although.
D
It's
conflicted
with
balkan's
feedback,
but
that's
another
story.
Yeah
yeah
I
mean
we
should
also
get
bogged
feedback
because
I
I
really
value
that
or
tigran
as
well.
You
can
definitely
point
out.
Some
tigran
is
extremely
good
at
performance
issues.
So
if
he
sees
an
api
design,
that's
just
never
gonna
have
a
good
performance.
Then
I
think
that's
also
something
to
kind
of
know.
You
know
if
the
implementation
doesn't
meet
his
performance
standards.
D
Maybe
we
can
like
pause
on
that
one
for
the
rc,
but,
like
you
know,
let's
just
make
sure
the
interface
doesn't
have
any
like
glaring
errors.
F
Yeah,
you
know
I'm
happy
to
say,
maybe
I
could
actually
start
contributing
to
this
group
with
once
we're
at
rc
one
is,
I
would
love
to
write
some
stuff.
I
will
check
with
the
new
relic
blog
folks
and
see
where
their
direction
is
and
where,
where
they'd
like
to
start
posting
on
this
we're
having
this
future
stack
event.
Right
now-
and
I
know
there
was
a
big
open,
telemetry
talk
given
at
it.
Strangely,
I
wasn't
there,
but
the
other
part
is.
F
I
am
a
consumer
of
this
api
because
we're
writing
a
shim.
It's
a
it's
a!
We
have
a
go
agent
right
that
talks
to
the
new
relic
endpoints.
We
would
like
to
help
our
customers
customers
migrate
from
that
over
to
open
telemetry,
and
so
we're
writing
this
shell
around
the
go
agent
that
that
acts
a
whole
lot
like
the
go
agent
is
binary.
I
mean
api
compatible
with
it,
but
but
uses
open
telemetry
under
the
hood
and
and
as
we
finalize
it
for
with
that
rc
underneath
it
I'll
definitely
be
giving
you
feedback.
D
I think there's also some action items to just kind of start reaching out to people. I'll try to reach out to Ted Young and Bogdan in the next week or two to ask them about some questionnaires or some feedback as well, and I'm guessing Anthony is already in contact with Jana and other people. So yeah.
G
Sounds good. In terms of timeline: we internally at AWS do monthly releases of our ADOT packages. We just released one yesterday, I think, maybe the day before, with a bunch of Lambda instrumentation. We would love to be able to get this into the June release. Our current target, and our hope, is that JavaScript, Python, and Go will be 1.0 and available in the June release, but I also don't want to pressure a release just so that we can hit that date.
D
I
would
like
that
as
well.
I
have
been
going
full
steam
on
this.
It
feels
like
for
way
too
long
yeah.
I
would
love
to
get
this
out.
I
think
end
of
june
seems
totally
reasonable,
based
on
our
current
project
board,
I'm
also
in
software
development
for
a
long
time-
and
I
know
that
that's
sometimes
not
gonna
happen
but
yeah,
I
think
that's
a
reasonable
target.
I
think
we
should
try
to
shoot
for
it,
which
means
that
we
should
probably
try
to
get
this
out.
D
Get
the
rc
cut
in
about
two
weeks
just
to
get
enough
time
to
actually
get
feedback
from
it.
If
we
want
to
get
some
sort
of
1.0
out
by
the
end
of
the
month,
but
yeah,
okay,
and
with
that,
I
think,
we've
gone
through
the
whole
agenda.
If
there's
anything
else,
anybody
else
didn't
want
to
put
on
the
dock
or
just
wants
to
bring
up
in
a
conversation.
D
Week,
ooh,
as
I
saw
steve
thinking,
I
was
like
really
excited
for
a
second.
E
I didn't... I just saw the fix yesterday, and no, I haven't tried it again. Very strange, because I was toying around with, you know, what if I include this option, what if I exclude that one, and trying different combinations; you can see my bug report. I think it led me down the wrong road of thinking the problem was with one of them. I don't really know why.
D
Play with it some more. The error handling is asynchronous, so I've gotten bit by that one before; so, like, it can definitely, like, it can be delayed. But there's, there's a lot of...
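The delayed-error behavior being described can be sketched like this (a simplified model, not the actual SDK code): the exporter reports failures through a handler from a background goroutine, so the error surfaces only after the call that caused it has already returned, which is exactly what makes it awkward to test.

```go
package main

import (
	"fmt"
	"sync"
)

// errorHandler receives export failures out-of-band, the way a
// global error handler does in an asynchronous pipeline.
type errorHandler func(error)

// asyncExporter queues work and reports failures later, from a
// background goroutine, rather than from the caller's stack.
type asyncExporter struct {
	wg      sync.WaitGroup
	onError errorHandler
}

func (e *asyncExporter) export(data string) {
	e.wg.Add(1)
	go func() {
		defer e.wg.Done()
		// The caller has long since returned by the time this runs.
		e.onError(fmt.Errorf("failed to send %q", data))
	}()
}

func (e *asyncExporter) shutdown() { e.wg.Wait() }

func main() {
	var mu sync.Mutex
	var seen []string
	e := &asyncExporter{onError: func(err error) {
		mu.Lock()
		seen = append(seen, err.Error())
		mu.Unlock()
	}}
	e.export("span-a") // returns immediately; the error arrives later
	e.shutdown()       // wait for the background work to drain
	fmt.Println(seen)  // only now is the failure guaranteed visible
}
```

In a test, the only reliable pattern is to block on some synchronization point (here, `shutdown`) before asserting on the errors, since the failure may land an arbitrary amount of time after the triggering call.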
D
...be delayed or not, so yeah. It depends.
D
On the provider; it depends on, like, how the exporter you're using, like, actually... like, there are so many things involved in the error handling. In fact, I really hate it when it comes to testing, and I don't have a better way, but, like, it's not... I don't know, it needs to be improved. Yeah.
E
It
just
made
me
wary,
because
I
it's
good
that
we're
adding
capabilities
over
time,
but
then
they
all
come
with
a
liability,
and
I
every
time
I
update,
I
want
to
have
to
struggle
with
proto-dependencies
with
basil.
There's
always
something
that
goes
awry,
so
that's
two
hours
usually
and
then
and
then
try.
Maybe
one
new
thing
and
then
see
if
all
the
stuff
that
used
to
work
seems
to
still
work.
That's
kind
of
my
mo
right
now,
with
with
the
releases
to
the
extent
that
the
dependencies
settle.
E
I
think
a
lot
of
us
can
ignore
that
stuff
when
we're
just
building
with
go.
But
when
you
use
basil
a
lot
of
times
like
three
levels
up
in
the
dependency
chain,
they
change
something
with
their
proto-generation
and
there's
a
new
conflict
that
comes
in
it's
very
strict
about
resolving
those
things.
J
No, I'm watching as Jordan Liggett has been slogging through Go dependency hell. So I'm watching and hoping and praying that he manages to resolve all the protobuf updates and stuff, and then I get to reap the benefits and go implement my feature. But I'll let you know when that happens, for sure.