From YouTube: 2021-07-01 meeting
A
Hello, hi, how's it going? It's going well. It sounds like you're coming from the bottom of the ocean there, Steve. Is it just me, or...
A
Headphones. Hello, hey, got it. Oh yeah, there you go. Cool, seven people. I think we could probably just jump into it. I think that we could probably start off, so let me open up a screen share here. Make sure I'm showing the right thing.
A
Actually, cool. Yeah, so also, if you have anything you want to talk about in the agenda, please be sure to add it. I've got a few ideas of some things here, but I imagine there's more to talk about that I haven't actually added. Sorry, I'm not the most prepared for this. I was definitely nerd-sniped by an issue recently, so I'm a little bit unprepared, but we can jump in and talk a little bit about the current project board.
A
Here
I
don't
think
there's
actually
been
any
movement.
We
have
had
two
people
pick
up
action
items
on
these,
but
I
haven't
seen
any
pr's
coming
in
so
probably
gonna
leave
them
in
the
to-do,
I
think,
seems
fair,
but
yeah,
I
think
otherwise.
This
has
been
pretty
stable.
I
haven't
been
able
to
dedicate
too
much
time
this
past
week
to
do
working
on
some
of
these
things,
but
it's
still
on
my
periphery.
Mostly
this.
I
could
probably
assign
this
to
me.
A
It
is
something
I
am
looking
at
enough.
I've
started
a
draft
on
a
release
feedback,
but
I
haven't
actually
had
the
time
to
formalize
that
into
something
I
wanted
to
share
yet
so
I
will
try
to
get
that
done
in
the
next
few
days
and
hopefully
we
can
then
get
this
out
to
the
community
which
is
going
to,
I
think,
tie
into
some
of
the
agenda
items
in
a
little
bit
but
yeah.
I
think
that's
it.
I
don't
think,
there's
anything
else.
We
probably
want
to
talk
on
here.
A
Let's
say
somebody
else
on
the
call
wants
to
pick
up
this
unifying
the
otlp
http
and
grpc
retry
settings.
I
think
we
can
probably
I'll
pick
that
up
eventually
and
move
on
there,
not
seeing
any
super
excited
faces,
which
is
pretty
understandable.
I
think
that
sounds
good
cool,
so
we
can
jump
into
the
agenda
here.
The
first
I
itemide
here
is
not
that
it's
gonna
be
me.
It's
gonna
be
welcoming
robert
as
an
approver.
A
I don't know if anybody on the call hasn't seen the PR, but we just added Robert from Splunk as an approver to the project, growing the list. We've really valued your contributions, so it's definitely a positive thing to have you come in. I just want to make sure it's recognized, and for people that weren't aware, let them know that that's happening. But yeah, thanks and welcome.
You
know
I
may
be
tyrannical
at
times,
but
I
swear
I'm
consistently
tyrannical,
so
maybe
that'll
help
so
yeah
next
anthony,
please
I'll
hand
it
off
to
you.
We
can
talk
a
little
about
some
police
timing
from
1.0
sure.
B
Sure. So I just wanted to ask the question, having just looked at the board: there are a couple of things we will have to address before 1.0 that will affect the API. I just want to know when we think we will be confident, with the RC having soaked in. Are we going to want to do a second RC after some of these changes we make, or are we going to make them and go directly to a 1.0? I want to get the group's take on that.
C
My suggestion would be to do RCs on a regular schedule, as we have been, until we've met whatever criteria we set for releasing 1.0. So we would do just about monthly a minor version bump. If we go for longer than a month, we probably need an RC2, or if there is any major change that we have coming forward, that should have an RC2 and we re-solicit feedback.
A
Yeah,
the
only
worry
I
have
right
now
is
that
we
haven't
actively
solicited
feedback.
Mostly,
you
know
we,
I
try
not
to
blame
people,
but
I
think
I'm
just
blamed
for
that.
I
haven't
actually
gone
through.
I
said
I
might
even
have
to
try
to
get
that
to
happen,
but
I
haven't
been
able
to
prioritize
time
right
now.
I
do
think
that,
even
without
that
there
is
valuable
feedback
that
is
coming
from
the
community.
A
We've
gotten
a
few
issues,
pretty
normal
issues
actually,
sometimes
where
we
can't
figure
out
the
update
process,
which
hopefully
this
will
be
the
last
time
that
happens
for
any
stable
package
that
we're
releasing,
but
some
of
the
other
ones.
I
think
that
are
just
you're,
seeing
the
community
start
to
rely
on
the
rc,
which
is,
I
think,
positive,
and
I
think
giving
it's
been
about
two
weeks
that
it's
been
out
there,
that
that
is
a
positive.
A
You
know
some
positive
feedback
you're
seeing
a
lot
of
dependencies
starting
to
be
taken
on
this.
Without
you
know,
major
issues
coming
in
saying
like
this
api
doesn't
make
any
sense.
We
can't
accomplish
these
things,
things
that
we
had
filled
in
the
past,
so
I
think
that's
viable.
I
like
this
idea
of
giving
it
a
full
four
weeks.
A
I
think,
if
that's
fair,
I'd
like
to
get
more
feedback
active
feedback
for
people
before
we
actually
do
a
lot
of
release,
and
the
last
thing
I
think,
aaron,
based
on
what
you
just
said
like
I
would,
I
would
probably
prioritize
trying
to
get
the
100
release
out
before
we
just
start
adding
in
functionality.
A
I
think
there's
a
lot
that
we
still
could
do
in
the
in
the
project
for
for
a
lot
of
the
stable
stuff,
like
we've,
talked
a
lot
about
a
whole
host
of
things,
one
of
which
is,
I
wanted
to
talk
about
in
a
second,
but
I
just
want
to
make
sure
that
there's
not
scope
creep
where
we're
continually
adding
functionality
every
two
weeks
and
getting
another
rc
up,
because
there's
more
functionality
when
in
reality
it
could
have
just
been
a
stable
release
and
could
have
released
that
after
the
stable
release,
is
my
only
concern.
C
The final comment that I had was more targeted on: if we find something that necessitates a change, like a large change, not, you know, small documentation, but we needed to add an API method, that should solicit a fairly quick RC time frame so that we can start getting feedback against that.
B
Yeah, I agree, and I think that the issue that was mentioned in the review of the backlog, about reconciling the retry settings for the gRPC and HTTP OTLP exporters, that's something that I think would necessitate us having a second RC, right? My goal would be that from RC x, whatever the highest x ends up being, to 1.0 there is no change; we end up making the 1.0 tag at the same commit that the RC x was.
B
That
would
be
the
ideal,
so
we
will
probably
have
to
have
an
rc2.
Hopefully
we
won't
need
an
rc3.
I
would
say,
let's
target
next
week
for
an
rc2
if
we
can
get
these
changes
done
and
then
another
couple
weeks
of
soap
testing.
For,
for
that,
and
you
know
trying
to
get
actively
solicited
solicit
some
feedback.
A
I
think
that's
a
really
good,
yes,
seems
reasonable
and
I
think
that's
a
really
good
timeline
in
order
of
operations
there,
based
on
what
you.
D
Just
set
up,
though,
just
if
there's
changes
that
we
think
we're
going
to
make
like
api
changes
like
the
one
that
you
have
open
here.
It
seems
to
me
like
that's,
for
that
to
come
in
in
between
release.
Candidates
is
a
little
odd
because
it
means
that
you
there's
something
you
intended
to
do,
but
the
candidate
that
you
published
isn't
really
the
one
that
you
would
publish
you're
still
intending
to
introduce
new
changes.
B
Yeah-
and
I
think
this
was
created
after
the
release
candidate
one
was
released.
It's
probably
something
we
should
have
noticed
before,
but
we
didn't
and
it's
an
opportunity
to
reconcile
that
we
have
to
take
now
if
we're
going
to
do
it.
Otherwise
we
live
with
the
fact
that
there's
going
to
be
two
different
sets
of
retry
configurations.
D
Sure
yeah
I
mean
I
understand
it
looks
like
a
good
idea,
I'm
just
I'm
just
wondering
whether
or
not
it's
it's
a
cause
to
back
off
from
rc
status
and
say
issue
like
a
you
know,
a
0.21
or
whatever
the
next
thing
would
be,
and
then
after
you
like
that,
then
come
along
with
a
release
candidate.
After
that.
C
I
think
the
cat's
out
of
the
bag
on
that.
I
don't
think
we
can.
B
We
we
could
particularly
given
the
thing
that
turned
sniped
tyler
and
how
go
mod,
treats
pre-release
versions.
It's
it's
a
bit
weird
and
there's
probably
ways
we
could
do
that,
but
I
also
think
that
if
we
make
an
rc
next
week,
we
get
these
changes,
and
you
know
there
are,
I
think,
three
issues
here
that
might
result
in
api
changes.
B
If
we
get
all
of
those
in
there's
nothing
else
that
we
know
right
now.
We
anticipate
the
change
so
that
that
would
be
a
true
rc,
and
I
think
that
that
rc2
is
something
that
would
be
a
viable
candidate
for
1.0
as
well.
If
there's
no
feedback
that
tells
us
oh
there's
something
else
that
needs
to
change.
A
Yeah, I think your point's really well made, Steve. We do need to switch into this mentality that we're about to support a stable release, and that, you know, there are going to be design flaws that we are going to release, and they're not going to be perfect, but we have to just work with them going forward. I don't think we're quite there yet.
A
I
think
that
exactly
like
kind
of
what
anthony
was
just
saying,
this
is
something
that
we
identified
in
the
api
in
in
the
interim
between
the
two
and
we
realized
like
it
really
should
get
fixed,
and
I
think
that
this
is
something
that
I
think
it
warrants
enough.
You
know
user
functionality
that
they're
gonna
benefit
from
this
having
this
unified
that
it's
worth
actually
making
that
fix
at
this
point,
but
I
think
that
your
point
is
well
made
that
saying
something
like
well
actually
looking
in
the
backlog
of
all
our
issues.
A
We
found
this
one
enhancement
that
we
really
would
like
actually
include
like.
I
don't
think
that's
that's
appropriate
and
I
think
that's
well
said,
but
yeah.
I
think
that
this
is
one
and,
as
was
pointed
out,
and
these
other
two
ones
are-
I
mean
the
documentation
is
not
going
to
be
an
api
breaking
one.
This
again,
this
is
another
one
that
was
kind
of
josh
found.
This
is
a
bug,
it
says
enhancement,
but
honestly,
it's
kind
of
a
bug
because
it
doesn't
actually
include
with
the
built-in
detectors.
A
So
I
think
this
is
kind
of
like
the
feedback
we
were
looking
to
get
for
this
release.
Candidate
was
to
say
like
well.
Here
you
go
and
they're
going
well,
these
parts
don't
really
make
any
sense.
Can
we
fix
those
and
they're
small
enough?
That
makes
a
lot
of
sense.
I
think
it's.
I
think
that
justifies
this.
B
Yeah-
and
I
think
with
that
one
if
we
take
the
decision
to
update
the
the
function
with
the
new
detectors
that
have
been
added-
we're
not
breaking
the
api,
we're
not
changing
anything,
we
can
do
that
now,
we
can
do
it
later.
We
should
probably
do
it
now
because
they
exist
and
we
want
to
keep
it
consistent.
B
But
if
we,
if
we
decide
to
remove
it,
then
we
do
need
to
do
that
now,
because
that's
not
something
we
could
do
going
after
one
point
on,
so
I
think
for
this
the
we
should
make
the
decision
before
we
decide
to
make
an
rc
and
then,
if
we
decide
to
remove
it,
do
it
if
we
don't
probably,
we
should
still
update
it,
but
we
could
do
that
at
any
time.
Strictly
speaking,.
A
Okay,
does
that
make
sense
steve.
D
I
follow
yeah.
I
now
that
I
understand
more
about
when
we
came
up
with
the
idea
for
that
other
fix.
It
makes
sense
that
it's
a
response
to
the
release
candidate.
It
wasn't
just
opportunistic
breaching
into
the
backlog,
as
you
said,
so
I'm
fine,
yeah,
okay,.
A
As was said, there is also still support for HTTP JSON that we're not including in the RC, but it's probably going to be added at a later point. Oh, I'm sorry, not JSON, wow, I misspoke. Metric support for HTTP protobuf, I think, is what I meant to say, but that's something we're planning. It might actually already be back in, but yeah, that's something we just understood that we were going to add back in, but that's for later.
A
Okay,
I
think
that's
enough
on
that
one
does
anybody
else,
have
any
ideas
based
on
anthony's
release,
timing,
what
he
just
said,
like
some
feedback
on
that.
A
One
of
the
things
I
was
kind
of
getting
some
feedback
on,
I
think
internal
to
splunk
recently
is
just
this.
In
fact,
there
might
even
be
a
discussion
on
it,
which
I
realized.
A
I
might
have
skipped,
how
do
we
debug
the
api
and
sdk
or
just
the
implementation,
that
we
have
it's
kind
of
a
big
open
question
right
now,
we're
saying
you
can
start
to
take
a
you
know:
a
solid
dependency
on
a
stable
release
that
we're
gonna
do
and
we
don't
really
have
a
way
to
say,
like
you're
running
in
this
in
production,
but
it's
not
showing
you
the
traces
that
you
want
or
they're,
showing
up
in
an
incorrect
manner.
How
do
you
debug
that
process?
A
That's
the
problem
like
that's,
definitely
something
that
I
think
we
need
to
solve,
and
I
think
I
wanted
to
include
this
in
our
discussion
next,
just
to
kind
of
like,
because
I
think
this
is
a
part
of
what
we
can
do
to
resolve
that
issue.
But
it
is
it
kind
of
raises.
This
question
is
like.
Is
that
something
that
we
need
to
require
for
the
100
release?
Or
do
you
think
that,
like
does
everyone
feel
comfortable
with
saying
we're
going
to
do
that
in
a
follow-on
release?
D
Are
you
talking
about
introducing
new
capabilities,
for
you
said
debugging,
but
I'm
just
trying
to
figure
out?
Is
this
like
including
more
information
logging
more?
What
are
you
thinking
of
there.
A
Well,
our
logging
strategy
right
now
is
zero,
so
yeah
logging
more
would
be
ideal.
Currently
we
have
some
error
pipelines
which
are
helpful
but
they're
not
like
there's
not
a
way
to
say.
Like
turn
on
the
debugging
and
all
of
a
sudden
you
get
like
these
really
verbose,
like
you
know,
pictures
of
what's
actually
transpiring,
there's
no
z
pages.
Some
people
are
really
critical
on
z,
pages,
being
really
helpful
and
helpful
for
debugging
and
yeah
and
then
like.
A
...that can't exist right now, because we don't even have a logging pipeline or any logging implemented and plugged in. Yeah, I think there are a few things that we could do for debugging. Our current solution and current strategy, as far as I've ever been able to tell, is just: also register a stdout exporter alongside and see what it's producing.
A
I
think
that's
it's
it's
better
than
nothing.
Don't
get
me
wrong,
but
it
doesn't
really
help
with
the
situation
like
well.
What
happens
with
some
of
my
fans
are
going
to
the
back
and
some
of
them
aren't,
and
so
I
plug
in
the
standard
exporter
and
they're
all
showing
up
like
well
what
happens
in
that
other
pipeline,
like
I
can't
really.
I
still
have
no
visibility
in
the
pipeline
right,
so
yeah.
I
think
I
think
I
could.
D
One
thing
we
could
consider
is
saying
something
like
if
we
don't
already,
we
reserve
use
of
environment
variables
with
this
name
or
this
prefix,
or
something
so
that
you're
kind
of
opening
the
door
to
a
control
that
you
could
build
in
later.
You
know,
set
this
environment
variable
to
this
comma
separated
list
of
features
to
activate
or
something
so
that
you
can.
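A minimal sketch of the reserved-variable idea being proposed: parse a comma-separated feature list from one environment variable. The variable name here is made up for illustration; any real name would need to go through the specification, as the next speaker points out.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// debugFeatures parses a comma-separated list of debug features to enable.
// The environment variable name used below is hypothetical.
func debugFeatures(env string) map[string]bool {
	feats := map[string]bool{}
	for _, f := range strings.Split(env, ",") {
		if f = strings.TrimSpace(f); f != "" {
			feats[f] = true
		}
	}
	return feats
}

func main() {
	// Hypothetical reserved variable; not a real, specified OTel name.
	os.Setenv("OTEL_GO_DEBUG", "zpages, verbose-export")
	feats := debugFeatures(os.Getenv("OTEL_GO_DEBUG"))
	fmt.Println(feats["zpages"], feats["verbose-export"], feats["metrics"])
}
```

Reserving the name up front costs nothing today and leaves room to attach real behavior to each feature later.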
A
Yeah,
I'm
not,
I
guess
not
too
concerned
about
that.
In
fact,
in
an
environment,
variable
thing,
I'd
want
to
make
sure
it's
specified
across
all
projects,
that's
something
we
want
to
do
consistently
across
with
telemetry.
So
I
think
if
that
would
be
something
we
would
make
a
specification
change
on.
I
think
it's
more
of
my
question
of
like
you
know,
right
now,
we
have
no
debugging
capabilities.
Let's
say
that
we
all
in
this
meeting
decided
that
we
should
add
logging
to
the
project
to
understand
like
where
things
are
going.
F
For me, the OTel project at large has said not much at all about how to handle errors in SDKs, period, and I think every repository is probably grappling with this a bit. There was a spec PR that I reviewed yesterday, where somebody proposed very explicit instructions to handle errors, and I didn't like that at all either, so I'm going to drop this note in the chat just so...
F
You
read
what
I
said
there
I
feel
like
there
should
be
high-level
guidelines
about
debuggability
of
the
sdk,
but
when
someone
wrote
we
should
log
no
more
than
once
per
request.
That's
like
way
too
often
for
me
to
start
with,
because
I
want
much
more
control
and
I
what
I
always
my
dream
solution
here
is
just
that
the
sdk
does
telemetry
of
its
own,
it's
meta
to
limit,
and
then
we
have
configurability
like
we
want
for
our
own
sdk
for
real
the
user's
plan.
Plummeters
configure
the
meta
telemetry
as
a
separate
matter.
F
The
only
requirements
are
that
you
have
to
be
a
lot
stricter
on
memory,
usage
or
verbosity
or
cardinality
in
that
meta,
telemetry
stream,
because
you
like
it's
almost
always
an
expectation
that
it's
going
to
cost
very
little.
So
you
you
never
want
to
like
compete
users
telemetry
anyway,
you
can
read
the
comment.
C
The
other
thing
that
you
have
to
consider
with
that
is,
we
probably
shouldn't
be
using
our
own
pipeline
that
we're
monitoring
to
monitor
the
pipeline,
there's
just
a
chicken
and
egg
problem.
There
is,
if
the
pipeline's
failing,
then
the
monitoring
of
the
pipeline
would
also
be
failing.
So
this
would
have
to
be
something.
F
For the record, if it's interesting, as a side topic: over the last year I produced this OpenTelemetry Prometheus sidecar using the OTel Go SDK, and I encountered the same type of problem. I've got this fire hose of metrics that I'm sending, and I'm also sending another fire hose about myself; it's a much smaller hose. And then the process crashes and nobody's sending anything. So I actually have three senders: one's the primary output, one is the secondary output, and one is the supervisor. And I might just...
F
I
can
point
that
code,
it's
all
open
source
and
it's
sort
of
a
opinionated
way
to
use
the
hotel
go
sdk
and
it
does
have
a
lot
of
telemetry
about
itself,
and
I
don't
know
I
feel
like
it's
the
type
of
answer
I
would
promote,
but
I
don't
want
to
delay
this
conversation
any
further.
A
So
one
of
the
things
that
I
took
away
that
joshua
said
is
that
there's
not
a
really
good
solution
across
sdks
here,
unfortunately,
I'll,
add
that
to
that
statement,
and
so
I
think
that
yeah,
it's
a
robert's
plan
as
well.
I
think
that
might
be
just
be
useful,
then,
to
not
block
our
stable
release
on
getting
some
sort
of
debugging.
I.
A
Yeah-
and
I
think
that's
something
that
you
know
like
I
know
in
the
z
pages,
you
can
change
the
well.
Sometimes
you
can
change
the
configuration
on
the
fly
or
something
like
that
like.
I
think
those
are
all
really
good,
positive
things.
We
could
try
to
look
at
adding
again
like
there's
just
a
whole
host
of
really
cool
things.
I
think
we
should
be
adding
get
me
super
excited,
but.
A
Like
canada
or
stable
release,
blockers
is
what
I'm
I'm
gathering,
though,.
B
Yeah,
I
I
think
it's.
We
should
all
be
very
fairly
confident
that
we
can
add
this
in
a
point
release
after
1.0
and
I
think,
there's
still
much
discussion
to
be
had.
I
think
I
would
be
on
the
opposite
side
of
the
changing
configuration
at
runtime
philosophy.
I
I
think
if
you,
if
you
want
to
have
your
system,
behave
differently,
you
redeploy
it
with
a
new
configuration
and
so
that,
maybe
that's
a
discussion
we
have
to
have
about.
Do
we
support
one
or
both
of
those
models?
B
How
do
we
support
it
and
that'll
be
a
longer
conversation
than
we
have
before?
We
should
have
a
stable
release
out
yeah
100.
A
Cool
all
right,
I
think
that
answers
my
question
with.
That
said,
I
think
we
can
jump
on
to
the
next
agenda
item.
Oh
no,
I'm
sorry!
This
isn't
complete
anthony.
You
still
have
another
part
to
this
right.
B
Yeah,
so
so
there
are,
with
the
with
the
rc
release
for
the
api.
I
didn't
do
anything
to
move
any
of
the
contrib
modules
to
a
1.0
rc,
largely
just
because
it
was
much
easier
to
handle
the
release
that
way,
but
I
think
going
forward.
We
should
consider
one
of
those
modules
we
can
and
should
move
to
a
1.0
status,
whether
that's
with
an
rc2
or
with
the
the
official
release.
B
I
would
propose
that
the
detectors
and
propagators
at
minimum
since
they
depend
on
the
the
sdk
interfaces
that
we've
said,
are
stable
and
will
never
change
again,
probably
can
can
be
stabilized
and
they
none
of
them
depend
on
the
metrics
or
any
of
the
unstable
interfaces.
I
don't
believe
so.
Those
would
be
two
things
I
would
suggest
could
be
stable
immediately
in
terms
of
the
instrumentation.
I
think
that
is
a
broader
question
and
probably
has
more
varied
answers.
C
...two, if it's particular to, say, AWS, it has sign-off from that portion of the community; and three, if there's some usage stat that we can pull, like more than 10 users or something like that, I don't know. But, you know, just some guidelines so that we can evaluate when the different components can actually be called stable.
A
I
think
if
that
sounds
reasonable,
I
I
like
your
proposal
of
having
some
sort
of
like
documentation
around
like
it,
helps
I
think
developers
understand
like
what
they
need
to
achieve
to
actually
reach
stability.
A
I
also
am
looking
at
this
and
I'm
I'm
remembering
that
we,
I
I
don't
know-
we've
been
holding
off
on
a
lot
of
updates
here
and
I
think
that
there's
meta
organizational
issues
that
we
should
probably
identify
and
and
address
first
one
being
the
the
documentation,
the
other
being
like
the
ci
pipeline.
I
know
this
is
like
very
diverged
from
what
we're
doing
in
the
the
main
repo.
A
It seems weird that this is a singular module, and it may not be useful; people that are importing v3 probably don't want to import a lot of the Jaeger stuff, although I guess that isn't the case, maybe not; I may have that turned around. But I think that we should probably do a little bit of cleanup on this repo before we actually do any sort of stable release of anything in it; that's my feeling on the matter.
B
Yeah, there was recently an issue created in the contrib repo to do precisely that for all of the collector components. I think those are perhaps slightly easier, because it's fairly clear what the public API of those components ought to be, and anything else probably needs to be hidden away. That makes it easy to say: yes, this public API will be stable, because it's just what's needed to implement the collector component interface. I think for detectors and propagators we can do the same thing here.
B
It's
the
the
the
instrumentation
where
that
probably
gets
more
complicated
because
they
will
have
much
more
varied.
I
think
public
apis
that
might
depend
on
what
is
being
instrumented.
A
Right
yeah,
I
agree,
anthony
I'm
guessing
you're,
getting
some
some
pressure
from
from
work
to
get
these
instrumentation
libraries
out,
I'm
not
opposed
to
it.
I
just
want
to
make
sure
that
we
have
especially
the
ci
system
and
documentation
would
be
ideal
before
we
do.
That.
Is
that
something
that
we
could,
that
is
reasonable.
B
Sure
yeah,
I
know
the
actually
the
any
pressure
that
I'm
getting
is
more
directed
towards
the
detectors
and
propagators.
I
think
actually
making
sure
that
x-ray
will
be
functional.
The
the
aws
sdk
instrumentation
would
be
nice
to
have
out,
but
I
think
that
that's
a
secondary
concern
for
us.
A
Yeah,
okay,
yeah
and
I
like,
like
you're,
saying
I
don't
think
that
the
detector
and
the
propagators
are
going
to
be
as
much
a
challenge
as
the
instrumentation
honestly
like.
I
think
I
think
the
instrumentation
is
going
to
be
a
huge
can
of
worms
and
we're
probably
gonna
have
to
iterate
on
it
because
you
know
just
I
don't
know.
Two
weeks
ago
we
were
talking
about
the
tracer
provider
coming
from
the
span
being
a
useful
pattern.
A
That
may
be
something
we
need
to
like
look
back
at
our
own
interpretation
and
reevaluate,
how
we're
writing
some
of
this
stuff.
So
I
I
that's
a
big
one,
but
if
we
can
try
to
focus
on
just
the
propagators
and
the
detectors,
I
think
that
that's
achievable
with
some
really
quick
turnaround.
Unlike
the
ci
system,
I
I
really
want
to
fix
that
that
and
having
it
declare
like
what
what
we're
using
for
our
metric
of
stability.
B
Yeah
and
the
the
http
instrumentation
I
know,
there's
been
requests
for
additional
metrics
to
be
added
to
this
micros
component,
but
I
think
that
we
probably
need
to
separate
out
metrics
and
traces
in
some
manner
so
that
either
there
are
two
separate
http,
instrumentations
or
there's
one.
That's
composable
and
the
metrics
is
optional,
because
I
don't
know
that
we
can
wait
for
a
1.0,
metrics
sdk
release
to
ship
1.0,
http
instrumentation.
A
Okay,
in
the
interest
of
time,
I'm
gonna
say
we
move
on
is
that
okay,
anthony
yep,
okay
next
thing
on
the
agenda,
I
had
looked
at
this
pr
for
a
draft
implementation
of
the
vlogging
instrumentation.
It
looks
like
robert's
also
looked
to
anthony,
mostly
who
I
was
kind
of
piggybacking
off
of
some
of
my
comments.
I
don't
know
if.
A
Yeah,
I
guess
that
was
like
gonna,
be
my
first
question.
Okay,
that
makes
sense,
then
maybe
it
makes
sense
to
to
talk
about
this
asynchronously,
because
I
had
some
questions
as
to
the
design
here,
but
yeah
if
they're
not
on
the
call.
Maybe
this
is
kind
of
a
good
point.
I
was
really
kind
of
wondering
why
we
weren't
just
implementing
something
similar
to
the
package
structure
we
already
have,
but
I
also
haven't
looked
back
at
this,
so
maybe
I
should
spend
a
little
more
time
instead
of
wasting
everyone
else's
time
here.
A
Oh
schedule,
okay,
that
makes
sense.
Well,
then,
I'll
just
the
way
I'll
rephrase
this
now,
because
I
don't
see
him
on
the
call,
if
I'm
not
mistaken,
is
if
you
have
time
to
take
a
look
at
this,
because
I
think
this
is
a
really
good
thing
that
we
could
introduce
right
after
we
get
the
stable
release
based
on
the
communication
we
just
had
and
would
really
benefit
the
project.
Having
some
logging
of
some
sort
that
we
could
have
severity
levels
on
would
be
extremely
useful
for
users.
A
I've
got
a
lot
of
feedback
on
this
one,
so
yeah.
I
think
this
is
something
if
you
have
some
time
and
some
cycles
take
a
look
at
it.
I
think
there's
a
good
proposal
here.
I
think
you
could,
like
I
said,
like
there's
some
there's
some
iteration
to
do,
but
I
think
it's
useful
cool.
I
don't
want
to
take
up
too
much
more
time
on
that,
I'm
going
to
pass
it
off
to
josh
with
the
draft
metrics.
F
...API. Thanks, I'll try to also keep it, or at least my remarks, quick. One thing I should say before I start is: I don't have a strong opinion here. I'm really trying to help finish this project, and I haven't been coding as much in the last year as I used to, so my Go context is falling off because of so much spec work.
F
I
did
put
together
an
api
that
feels
about
kind
of
I
mean,
there's
different,
many
different
rights
to
me
and
like
there's,
you
could
choose
like
like
there's
different
choices.
You
can
make
the
existing
api
that
we
have.
I
don't
don't
have
a
huge
problem
with
it.
It's
just
that
go
doc
is
like
really
dense
and
hard
to
read,
because
it's
cluttered-
and
I
one
thing
we
can
try
to
do-
is
which
I
haven't
done
yet
is
try
to
just
simplify
the
go
doc
without
moving
things
and
breaking
things.
F
If
we
took
that
route,
all
we
need
to
do
is
rename
value
recorder
to
histogram,
and
the
sum
of
server
becomes
asynchronous
counter
and
stuff
like
that,
like
there's
just
some
renaming,
and
we
could
make
a
very
non-disruptive
path
1.0,
but
we
have
received
feedback,
the
one
that
stuck
out
to
me
was
from
jana
dogen.
He
came
in
almost
a
year
ago
and
gave
some
some
pretty
detailed
feedback,
and
I
and
I
my
recollections
of
that
feedback
and
put
together
this
draft.
F
There was a lot of duplication, like repetition of nearly identical APIs, in the current API; you see that in the godoc. The route I took here was to split the integers and the floating points into separate packages. You can read one or the other documentation page and understand them both, because the only difference is integer versus floating point.
F
Then
there's
also
synchronous
and
asynchronous
where
the
calling
patterns
are
just
slightly
different
and
there's
so
there's
questions
here
about
how
to
structure
the
apis,
and
it's
also
worth
saying,
there's
several
types
of
complication
that
I
just
removed
and
I'm
just
I
just
wanted
to
keep
it
simpler.
There
was
an
apparatus
for
from
must,
like
must
register
instruments
and
returning
no
errors,
so
you
could
do
one-liners,
I'm.
I
don't
think
it's
worth
the
trouble
at
this
point.
F
You're
you're
saying
the
user
has
to
deal
with
that
and
they
can
add
one-liners
for
themselves
or
that
or
generics
will
come
along
and
whatever
the
other
thing
I
took
away
was
bound
instruments.
If
that
is
a
demand,
I
think
we
can
add
it.
It's
still
compatible
with
like
this
api
either
of
these
apis,
but
it
just
adds
a
lot
of
like
interface
and
personally
from
using
the
api
for
a
year.
I
don't
find
it
all
that
useful.
The
other
thing
I
took
out
is
well.
F
I
started
to
take
out
and
I'm
sort
of
on
the
fence
about
is
batch
and
the
observer
color
pattern.
I
I
use
the
word
callback.
One
of
you
asked
me
about
that.
I
it's
just
from
my
hotel
metrics
like
I'm
using
that
word,
because
it's
what
we've
been
using,
I'm
not
sure
it's.
The
right
word
handler
might
be
the
right
word.
F
The thing I was experimenting with here is to have essentially a one-to-n relationship between callbacks and asynchronous instruments, because for some of the common instrumentation packages that we know about, like runtime metrics, you have this ReadMemStats call, and it's got variables, and you kind of want to have the callback fire once and output ten asynchronous variables.
F
If we lose that... it's not exactly clear what the spec will say about stuff like that, but I think we end up saying languages should do what feels right for themselves. That felt like an important thing, so I kind of kept it, so you'll see I have a callback...
F
...and the instruments are associated with callbacks, and then the SDK calls the callbacks, expecting to see the instruments used from the correct callback. So there's a little bit more burden on the programmer of the asynchronous instruments to use them correctly: you can only use them from the callback where you registered them. This structure avoids some issues that are known about the current API. There's a race condition on startup: do you start observing right away, or do you wait till the SDK starts?
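The one-callback-to-many-instruments shape described above can be sketched as follows. None of these names come from the actual draft API; this is just an illustration of one callback firing once per collection and reporting several values.

```go
package main

import "fmt"

// observation pairs an instrument name with a value reported by a callback.
type observation struct {
	instrument string
	value      int64
}

// result collects observations during a single callback invocation.
type result struct{ obs []observation }

func (r *result) Observe(instrument string, v int64) {
	r.obs = append(r.obs, observation{instrument, v})
}

// meter holds registered callbacks; an SDK would invoke them on collection.
type meter struct{ callbacks []func(*result) }

func (m *meter) RegisterCallback(cb func(*result)) {
	m.callbacks = append(m.callbacks, cb)
}

// collect runs every callback once, as one collection cycle would.
func (m *meter) collect() []observation {
	var all []observation
	for _, cb := range m.callbacks {
		r := &result{}
		cb(r)
		all = append(all, r.obs...)
	}
	return all
}

func main() {
	var m meter
	// One callback reports several related values at once, the way a
	// runtime-metrics package would report many fields of one ReadMemStats.
	m.RegisterCallback(func(r *result) {
		r.Observe("runtime.heap_alloc", 1024)
		r.Observe("runtime.num_gc", 3)
	})
	for _, o := range m.collect() {
		fmt.Println(o.instrument, o.value)
	}
}
```

Because the SDK drives collection, observation only happens after registration, which is one way to avoid the startup race mentioned above.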
F
There's no Start call for the SDK, so there's no proper way to register batch instruments today anyway. That's some of what went into this. It helps if you look at the godoc, which I could present if you wanted. Let me share the screen; I'm not sure this will take very long. I promise not to talk very much, but this is what I'm looking at here.
F
So
this
would
be
the
float
interface
and
so
there's
there's
four
types
of
the
meter
that
each
instrument
type
has
a
primary
method
for
a
single
measurement
like
add
or
record,
and
then
also
has
this
measure
function
and
it
returns
a
measurement
that
can
be
used
for
the
batch
api.
So
there's
a
batch
record.
F
And
if
you
want,
if
you
want
to
call
a
single
operation
with
many
measurements
at
once,
you
could
use
this.
This
measurement
api.
That
was
another
feature
that
was
part
of
open
census,
which
was
allegedly
for
performance
to
have
the
ability
to
make
multiple
observations
or
measurements
in
one
one
call,
but
it's
not
exactly
clear
today
why
that
matters?
F
If
we
end
up
moving
towards
multivariate
metrics
in
open
telemetry,
which
I'd
like
to
see,
then
multi
multi
variable
recording
totally
makes
sense,
it
gives
you
an
optimization,
as
well
as
a
semantic
like
benefit,
but
today
the
bigger
use
for
batch
recording
is
asynchronous
ones,
and
that's
where
you
get
this
complicated
one-to-many
relationship
between
callbacks
and
instruments
anyway,
I'll
I'll
stop
talking.
I
I
really
appreciate
aaron's
feedback
and
robert's
feedback
I've
already
it's
already
given
me
like
at
least
some
confirmation.
F
One of the suggestions was to use Number, essentially, to combine float and integer. What that means is that you're going to have to wrap every number you pass to the metrics API with whether it's a float or an integer. And then the problem is: can you make mistakes, or does it not matter? Some of the code is going to care about it, but the protocol lets you mix floats and integers. So I'm not sure. Anyway, I'm here to serve you.
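The Number idea can be sketched as a small tagged union. This is a self-contained illustration of the trade-off being described, not the OpenTelemetry implementation: callers must tag every value, and mixing kinds on one instrument becomes a runtime concern rather than a compile-time one.

```go
package main

// NumberKind discriminates which field of Number is valid.
type NumberKind int

const (
	Int64Kind NumberKind = iota
	Float64Kind
)

// Number is a tagged union over int64 and float64, the "wrap every
// number you pass to the metrics API" cost mentioned above.
type Number struct {
	kind NumberKind
	i    int64
	f    float64
}

func NewInt64Number(v int64) Number     { return Number{kind: Int64Kind, i: v} }
func NewFloat64Number(v float64) Number { return Number{kind: Float64Kind, f: v} }

func (n Number) Kind() NumberKind { return n.kind }

// AsFloat64 coerces either kind to float64, which is what lets the
// protocol mix floats and integers even when instrument code cares.
func (n Number) AsFloat64() float64 {
	if n.kind == Int64Kind {
		return float64(n.i)
	}
	return n.f
}
```

The open question from the discussion is whether a kind mismatch should be an error or silently coerced, as `AsFloat64` does here.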
C
I would have one ask: I'm not exactly sure how I would use it, especially the asynchronous instruments. I think the synchronous ones sort of make sense, but could you just have a prototype example? I know this doesn't have any other working stuff in there, but just some example code of, hey, here's how you measure something via the callback method.
A
Runtime instrumentation.
F
This nil is the set of labels that would apply to every measurement here, but there aren't any specific to the runtime measurement. This pattern has a race in it: I may call this function before I make... I can't remember how to describe it. It's filed in an issue somewhere, but there's a race condition in this API, and I was...
A
F
C
But yeah, just some kind of example, because I haven't been in the metrics world too much, so I don't have a very good one-to-one mapping, or even a sense of how the Prometheus metrics, which I was familiar with, map to how OTel metrics work. So, just at this point, the synchronous...
F
APIs are basically the same. Never mind Gauge, it's complicated. But Prometheus does have that sort of custom collector concept, and you're basically allowed to do asynchronous instruments that way. So it was possible, but the docs didn't make it clear.
F
An example function would go a long way, from my understanding. With what I have, I snapshotted it on Tuesday morning, because Sean, who's not on the call, asked me what was happening, and so I put out that draft with no examples. But yeah, next time, maybe next week, I'll come with some examples and we can do this discussion again. Thank you all.
B
F
It kind of matters, because I'm changing the way you register instruments. So now, if the primitive to register an instrument returns two values, you're going to have to add a helper method, or you're going to have to do it in a function, an initializer, or a constructor. Some people really like the idea of static instrument registration, and that was why I created that Must pattern. But it's just that for every interface you need two of them, and it's really irritating, yeah.
F
A
Snapshot, so yeah, I'm looking forward to that. I'm gonna hand it over to Garrett next. Let's talk about OTel with AWS Lambda: an overview.
G
All right, it might be easier if I just share my screen in a sec, but for those of y'all who I haven't met, I'm Garrett. I'm an intern at AWS, and I've been working on adding OpenTelemetry instrumentation for Lambda, which is its own bag of worms versus all the other instrumentation. Oh, it's not gonna let me share my screen yet; let's see what happens here. I'm not ready for a PR or anything like that.
G
I just want to give you all a heads-up on what I'm working on before I do get there, so that it's not just a bunch of code, you know, added into all your repos and stuff like that. So let me enable my screen sharing for Zoom.
G
Oh, that's a shame. Okay, maybe we won't do that. If you could just share it, I'll jump through it. It's just gonna restart everything. Sounds good, cool. So, a lot of the information in here is very specific to Lambda, and if you're interested in it, please look into it. I won't go through the super nitty-gritty, because I know that's not totally relevant for everybody here, but the information is there should you want it. So, just to start off:
G
Here's a basic example of what having a Lambda function in Go looks like. For your code, you pretty much import lambda, and then you have some handler that will get called every time your Lambda function's invoked, and you do whatever logic you want. It's got a couple of optional parameters: a context, and then some event. The event is whatever you want; it's just your payload.
G
The context will end up being important later down the road, so I wanted to point it out here. We can skip over the components; those will come up naturally as we talk. The whole goal of what we're doing here is to provide a wrapper that a user can wrap around their handler, and that will enable tracing for all their downstream things.
G
It'll provide the trace ID, resource attributes, everything like that for them. Garrett, the example? Yes.
D
Well, will you also provide a wrapper that isn't the reflection-based wrapper, where you could dig one level deeper? So you know.
D
The one that you showed above requires reflection on each call to decode the parameters and find the right handler function and all that.
G
We were planning to just do it with the single wrapper, because that matches the style of how Lambda does their Start. They provide you two entry points, Start and StartHandler, and then they use reflection to figure out what parameters you've provided. There are only a couple; I think there are like six options total of what you can provide, and it'll spit back out at you, hey, you didn't give us the correct parameters or return values and stuff. So we were going to.
G
Do it the same as that, so that you don't have multiple wrappers that go into one lambda.Start.
G
D
G
Yeah, so the overall goal is we'll provide a wrapper of some sort that the user just wraps around their handler, and then when they call Start, or the other StartHandler entry point, it does all the stuff for them. Scroll down a little bit, let's see. Yeah, so here's the basic flow: Lambda gets invoked, it goes into the, you know.
G
What's now the customer application, first the wrapper, and it sends data using the OpenTelemetry SDK to a collector, which then sends the trace data to either X-Ray or whatever service you're using for that. Obviously we're mostly interested in X-Ray, but it'll work for whatever, depending on your collector configuration. And then it runs your customer's handler. The implementation process: I'll briefly go through this in case you're interested.
G
What is required of a customer is that they have to change their Lambda runtime. There's stuff that the Go runtime doesn't actually support, but sort of supports; it's kind of a quirky thing. You just have to change your runtime and then it will work as planned, without changing your source code. Then you'll add the Lambda layers.
G
Lambda layers is what they're called, if you're not familiar. They, along with Lambda extensions, allow you to run a separate process alongside your Lambda function. So that's how the collector will run: it'll just run locally in the same runtime, and then you can access it locally, and it'll get started up automatically for the user.
G
They don't have to deal with any of that manually. So you'll have to add a collector as a Lambda layer, and then all your Lambda setup's done, and you just need to instrument the stuff, as described above and here as well.
G
The one quirk that comes up, as I mentioned above, is with the context. Since everything is very explicit in Go, in order to actually instrument your downstream services, whether that be AWS clients or HTTP requests, anything like that, you do need to use that context.
G
So, while it is an optional parameter, if you are not the most downstream service, you will want to use it in order to instrument things. Our wrapper automatically taps into that context if it exists, and adds stuff like the trace ID, various attributes and resources, all into the context for you. So when you later use the context, everything's already set up; the user doesn't have to customize the context in any way themselves.
G
Let's see, what else. Yeah, once all this is done, there will be a new package in the future in the contrib repo for OTel Lambda. That's going to hold both the wrappers and all that stuff, and then we also need a new resource detector for AWS Lambda, with the specifics of how we pass data out of Lambda and so on. I know that's a lot of information, a lot of Lambda-related stuff that's skimmed over; there's a little more information in there.
G
D
G
Yeah, I should have mentioned that as well. That's something that all the various Lambda implementations have had to deal with. The Lambda extension isn't directly related to that; however, it is an important thing to talk about. The Lambda extension is just a way that they can have the collector running locally in their Lambda environment, since they don't have direct access to that environment.
G
Without being able to just manually kick off the collector, we had to provide some way to do that, and the Lambda extensions are our way of doing that. But for the freeze stuff: how Lambda works is that it'll freeze when your application's not in use. So as soon as it finishes invoking whatever your function is, Lambda will come in and arbitrarily freeze your application, just stop it exactly as it is. You get no warning; you don't even know what happened.
G
Yeah, there are no signals there. So if you have data waiting in the collector that hasn't been sent out yet, it'll have no idea it hasn't been sent; you'll just be missing your traces. So what we do, and we've had to do this in all of the implementations using Lambda, is force flush the collector to ensure that our traces get sent out before Lambda freezes.
B
Yeah, this brings up, I think, an important point of differentiation between this and most of the other instrumentation that we have. Most of the other instrumentation will take a tracer provider and propagators as configuration options, whereas this instrumentation is actually responsible for setting up the SDK, and it will set it up in an opinionated manner. It's going to create the tracer provider.
B
It's going to configure the propagators and set them into the globals, but then it will also handle force flush out to the collector. And that collector is set up in a specific way, such that it's got just an OTLP receiver and an X-Ray exporter in a single pipeline, so that it's all single-threaded and the data flows straight through it. There's no pausing, no buffering or anything like that, so that as soon as the wrapper calls force flush on the exporter, it goes wrapper, collector, X-Ray.
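The straight-through pipeline described here might look roughly like the following collector configuration. This is a sketch: the `awsxray` exporter comes from the collector-contrib distribution, and the exact keys depend on the collector version in use.

```yaml
# Single pipeline: OTLP in, X-Ray out, no batching in between,
# so a force flush pushes data straight through to X-Ray.
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  awsxray:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
```

Note there is deliberately no `batch` processor in the pipeline, matching the "no pausing, no buffering" point above.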
C
So there was a recent OTLP HTTP...
C
Discussion about a public API, I believe in the otel-go channel, and the concept behind the public API is that, instead of having the context you're creating be a child of the incoming context, with a child-parent relationship, it would be created with links instead, meaning there's a security boundary that you've crossed and you don't trust that incoming context to be the parent.
C
Of your current context. Is there any kind of consideration for things like that that might occur, or is it just always going to be the opinionated way? I figure you probably get 90% of the use cases with the opinionated version, but.
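The parent-versus-link distinction being asked about can be sketched in a few lines. This is a self-contained illustration, not the OpenTelemetry API: `SpanContext`, `startChild`, and `startLinked` are invented names showing the two ways to relate an incoming trace context to the span you start.

```go
package main

// SpanContext carries the identity of a remote trace.
type SpanContext struct{ TraceID string }

// Span records how it relates to the incoming context.
type Span struct {
	Trace  string        // trace this span belongs to
	Parent *SpanContext  // set when we trust the caller (child-parent)
	Links  []SpanContext // set when we only reference the caller (link)
}

// startChild continues the caller's trace: same trace ID, parent set.
func startChild(remote SpanContext) Span {
	return Span{Trace: remote.TraceID, Parent: &remote}
}

// startLinked begins a fresh trace across a trust boundary and keeps
// the caller only as a link, so the caller cannot control our trace.
func startLinked(remote SpanContext, newTraceID string) Span {
	return Span{Trace: newTraceID, Links: []SpanContext{remote}}
}
```

A "public" endpoint would use the linked form; internal hops, where the caller is trusted, use the child form.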
C
B
In this case, that level of distinction between public and private would have to happen up at your API Gateway or ALB or something like that, right?
E
B
By the time a Lambda function is invoked, an X-Ray segment and their trace have already been started by the API Gateway and get passed to the Lambda. There are already segments for the invocation and cold start periods that will be caught up; we're actually just inside of a trace that's already been started by a larger portion of the Lambda system.
G
Okay, yeah. So Lambda does automatically start a trace for you, before it even reaches your customer code, yeah.
A
G
That's mostly it. It's a span wrapping the function, and it'll then allow your internal spans to be connected, obviously, to your created traces, right? Yeah, so that's pretty much it.
A
One of the things we learned from the gRPC instrumentation, and I think it still exists for the streaming API, is that there's a span and we just add events to it, and it's an unbounded memory situation. There isn't a way to update the span while this is actually running the handler, or is there a way to do that?
B
D
One thing I think would be useful, thinking of this as a consumer, not of the grief...
D
That this would bring on you as an implementer, would be if you could maybe layer this, so that if I wanted to use my own SDK exporter and not rely on the sidecar here, I could. You know, as long as I bear the responsibility for flushing, or decide that I don't mind losing a few traces due to an unfortunate kill, I could use the auto-instrumentation without buying into the whole SDK configuration and sidecar business.
D
So I'm wondering whether you could achieve that through layering: maybe making one layer down public and exported, where there's a higher-level wrapper that takes care of all that stuff, but then there's a lower-level one that could, you know, do the propagation and set up the context and that kind of stuff, but expect that I bear the burden of configuring exporters and such.
G
I'll definitely look into that and see what that would entail, because I do see how that'd be really useful for people using it. Obviously you don't want to be tied into X-Ray just because of how we're doing it, yeah.
D
You lose some performance by force flushing every time, yeah. You know, calling flush helps, but yeah.
A
I think it's a really good point, actually; I didn't think about that. I need to call time: we're a minute over, and I got lost in thought on this one. So thank you for showing this. Please, I don't know if you haven't...
A
Yeah, it doesn't look like suggestions are enabled on this doc. I don't know if you can change that, but maybe then people can provide some more feedback on the doc, please. Cool. Otherwise, we're out of time, so I'll see you all next week, and thanks for joining; a really great discussion today.