From YouTube: 2021-03-05 meeting
A
Portland chat, Portland chat, and now we've got: how's the weather in Tokyo?
D
Texas. All right, we had an interesting meeting yesterday, the earlier meeting. We really need names; I don't know, maybe we come up with names, like trees or something, yeah. I don't know what to name our meetings to keep them making sense.
D
Yeah, correlate those. So the proposal is to pull out, and so this is just the Netty client instrumentation. We think we have this problem; we're not sure how.
D
This is so painful, yeah.
C
Not not. I mean, obviously these we can implement; it's more just, again, the base Netty usage: is that something we can support, and if it is, how do we support it? And there are probably still cases of Netty usage where we can't support it; we just have to be specific: you can't use pipelining, or whatever people might be benefiting from, but I don't know.
D
Yeah, and there are questions that I sort of have that I don't know the answers to. I mean, there's the Netty HTTP codec, and that does have a concept of request and response, and that's sort of at the layer our instrumentation is. So, like, I agree: instrumenting bare Netty, just really bare Netty, is hopeless, but the HTTP layer on top of it gives you some clues.
D
That's just an arg, like a temporary kind of structure that's passed in.
C
So even though we're using the word pipelining, the one people care about more is HTTP/2, because HTTP pipelining no one's supposed to actually use. But HTTP/2, I think, has the same problem, where it's the application that is actually managing the request-response pairs. I've seen this in Armeria.
C
I've talked about propagation being so hard, and I never remembered this anecdote, but this reminded me of it: when we were considering automatic propagation for Armeria, one of the main reasons we decided against it was that, when writing to the channel, we don't actually know which request is currently active, and so we weren't even able to come up with a way to propagate automatically through the Netty handler.
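The correlation problem being described here, pairing responses with requests on a connection where several requests can be in flight, can be sketched as a per-channel FIFO queue, since HTTP/1.1 pipelined responses arrive in request order. This is an illustrative stand-in only; the class and method names are invented and are not the Netty or OpenTelemetry APIs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: on a pipelined HTTP/1.1 connection, responses
// arrive in request order, so without protocol-level IDs the only way to
// correlate them is a FIFO queue per channel. All names are invented.
public class PipelinedCorrelation {
    // Contexts of requests written to this channel that are still in flight.
    private final Deque<String> inFlight = new ArrayDeque<>();

    // Called when a request is written to the channel.
    public void onRequestWritten(String traceContext) {
        inFlight.addLast(traceContext);
    }

    // Called when a response is read: pair it with the oldest request.
    public String onResponseRead() {
        return inFlight.pollFirst();
    }

    public static void main(String[] args) {
        PipelinedCorrelation c = new PipelinedCorrelation();
        c.onRequestWritten("ctx-1");
        c.onRequestWritten("ctx-2");
        // Responses come back in order, so the first pairs with ctx-1.
        System.out.println(c.onResponseRead()); // ctx-1
        System.out.println(c.onResponseRead()); // ctx-2
    }
}
```

The anecdote above is exactly why this breaks down for automatic propagation: at the moment of the channel write there is no reliable "current" context to enqueue, which is what the HTTP codec's request/response objects help recover.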
D
Okay, yeah, that's good to hear about the Armeria experience, because that was one of my questions for you. I'm curious what Trask's thoughts on the matter would be.
D
Yep, so WebFlux.
D
Okay, okay, so we have to, yeah, yeah, I mean, like. I know, I'm not too sad about it. For my particular customers, primarily, you know, it's all WebFlux Reactor Netty stuff, the people who are using Netty.
C
So, like, I guess Nikita has looked into this a lot. Does he happen to know whether these libraries provide an instrumentation hook type of thing? I would definitely assume Ratpack does, because that's sort of an HTTP library, it was designed for that; probably Reactor Netty too, I don't know, because that's more of a low-level thing.
D
It's a separate project that's not under Spring, so you can use it separately.
D
Does that sound reasonable to you? I mean, it feels like something that would be ideal to pull out before 1.0, but that would leave kind of a big gap there until we.
D
But, oh, I guess, yeah, I'm not thinking about pulling it out from the perspective of there being bugs, but from the perspective of: if we know we're going to pull it out, I'd rather pull it out sooner than later, and not have people depending on, say, the Netty instrumentation working in 1.0 and then dropping it in 1.1.
D
But that's, I'm totally fine with dropping it in 1.1 also. I mean, we're definitely making it, you know, we're not declaring stability of the telemetry, so.
C
I guess, ideally, we'd have it maybe included only for these cases, but there's no way to do that, right? So I don't have a strong opinion either way. I guess I lean towards keeping it if we can't fix all of these very quickly, which we can't, I think, because these are still some important things to instrument.
D
I have a WebFlux smoke test that fails maybe once every 30 or 40 times in CI, and so I have a branch where I run it in parallel 20 times, and I've been re-running that and starting to add some debugging into there, and I'm curious if that's it. It's not capturing, well, it's capturing the request, but it's not capturing what's happening inside the request, so there's, like, the WebFlux internal span. So it feels like a correlation issue, which feels like it might be this, but I'm not sure yet.
D
John asked everybody to go and start watching and start contributing to the Spring Cloud Sleuth OTel project.
A
Yeah, you and I chatted about this a little bit, but I think that needs help. I think it's really a mess right now, and I think it'd be helpful for those of us who know the OpenTelemetry side pretty well to help kind of firm up the implementations, especially of the propagation in here, the context.
A
Propagation is kind of haphazard at best at the moment. But then there's also: they are using the instrumentation APIs, the OTel HTTP server handler, or sorry, tracer, and client tracer as well, and I wonder whether they need to be, or whether we could give them a narrower API to use.
A
I mean, you and I chatted a little bit about the context stuff. I think the way the context propagation bridge is done right now is way more complicated than it needs to be.
A
I think also it will take, that will be, a pretty big project, because it'll involve ripping out a whole bunch of code and then making the tests somehow work with the ripped-out code. And I have not looked at baggage at all, and I know the baggage propagation is broken right now.
D
What was the idea about context propagation using Brave?
C
So, in our opentelemetry-java, we have one example project which is context bridging between OTel and Brave, where we store OpenTelemetry context into the Brave context. Because this is based on Brave, it should work here, and that would probably fix a lot of problems. So.
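The bridging idea being described, storing the OpenTelemetry context inside the Brave context so it rides along wherever Brave propagates, can be sketched with plain stand-in types. The classes and field names below are invented for illustration; the real Brave `TraceContext` and OTel `Context` APIs differ.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the OTel <-> Brave context bridge discussed here: the OTel
// context is carried as an extra entry inside the Brave context, so any
// boundary that Brave propagates across also carries the OTel context.
// These are stand-in types, not the real Brave or OTel classes.
public class ContextBridge {
    // Stand-in for a Brave TraceContext with extensible "extra" data.
    static class BraveContext {
        final Map<String, Object> extra = new HashMap<>();
    }

    // Stand-in for an OpenTelemetry Context.
    static class OtelContext {
        final String traceId;
        OtelContext(String traceId) { this.traceId = traceId; }
    }

    static final String OTEL_KEY = "otel-context";

    // Store the OTel context into the Brave context when a span starts.
    static void store(BraveContext brave, OtelContext otel) {
        brave.extra.put(OTEL_KEY, otel);
    }

    // Recover it on the far side of any Brave-propagated boundary.
    static OtelContext restore(BraveContext brave) {
        return (OtelContext) brave.extra.get(OTEL_KEY);
    }
}
```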
A
Yeah, yeah, it wouldn't be. Oh, the concern is that I think it may have been made into an interface and may not, I haven't looked into it, but there's. No, I don't think there's a context implementation in Sleuth; I think that's left up to the individual implementations. So anyway, it needs to be looked at.
A
It's a good idea, but we need to actually figure out if it's possible. The propagation APIs confuse me a lot in Sleuth also; it seems like there are too many ways to do things, and I don't understand when one should be used versus the other, which I think confuses the context propagation story as well.
D
And then we talked about what takes precedence, the Java agent or library instrumentation, and you probably saw the comment; I summarized our decision, which I agreed to change my mind about, to have library instrumentation take precedence. I think that's a good plan, and it's easy, and it's consistent.
D
So, yeah, thanks for starting to break out the library instrumentation; that definitely, you know, I think, motivates this discussion to be real now.
C
Yeah, like OkHttp: I had no real reason to do it, but I noticed that it was already an interceptor in, like, OkHttp. How long does it take for me to migrate this library? It was just sort of a test thing. It was very easy, especially since Mateusz has done such awesome work on the testing side, so it was less than 30 minutes just to move some files around, and it all just worked. Obviously really easy if it's already using those hooks, so that was nice.
D
Cool. Oh yeah, so I did ask, and I think the preference is for the 1.0, which I think we're still probably making tomorrow, to still only be the Java agent, and then we can line up, you know, decide on, kind of do a final review of any of the library instrumentations that we want to mark stable, and do that, you know, with 1.1 or whenever.
D
Yeah, yeah, yeah, but let me go and add that to the maintainer meeting agenda; I want to make sure that we keep that sort of front and center in people's minds. See. Okay.
D
Oh yeah, I noticed, we don't have, I don't think some people know we're on Slack.
D
Probably because I deleted the Gitter before we had a Slack.
D
And I already thanked John and you for the excellent versioning doc in the Java SDK that we can reference now. So I had added, you had asked about that debug one under experimental.
D
So I'd added this, that these were not stable. But if we already kind of know that we're going to deprecate it, then I will put that under experimental, debug, debug-experimental, to be extra clear.
D
And we have fail-on-context-leak, which is part of that debug, the debugging propagator, which, I mean, could be under debug, but.
D
We have, oh, the class file transformer safe logging for Gradle. That.
D
So we don't have anything else under debug currently. This.
D
Especially if I add the, I'll move the, yeah, so that makes sense. I'll move this one to experimental, or debug-experimental or something, and then drop this.
D
So what would, like, yeah, so.
D
Flags, so do you think we should call out here?
D
Cool, so I will add that. Basically, I'll add a note down here: other than the namespaces above here, those are considered stable.
D
So those could change, but I agree the intention is for anything outside of that to be considered part of our 1.0 stability. Cool.
D
About, okay, let's see: are there any PRs that.
D
No, no! No companies.
D
Yes, yes, last 90 days, last six months.
A
I just ran the batch span processor benchmark on my laptop with the PR from the contributor, and it only looks better when the exporter takes zero seconds to do its export; otherwise it's basically a wash, and there's no improvement.
B
I thought, I mean, based on that and those initial benchmarks, which we didn't really see the code for initially, I thought it was a worthy experiment, but it didn't really pan out, so.
A
Yeah, I mean, my hunch is, well: I think that whatever was being done there, there's way too much noise from other things to actually draw any conclusions. But also, yeah, everything performs really, really well when you sample out all your spans and you don't have anything to process.
B
Was it error bars, probably, yeah? There's no error bars in any of those. No, I know, but you think that's probably what it is? I mean, I don't know. I have no idea what they're doing; they're spinning up a pod that's, like, feeding test data in through, sure, but I don't know, like, how many.
A
So, yeah, I was pondering quite a bit last night and this morning whether it might be an interesting exercise to write kind of a custom SDK for OTLP that didn't have the double transformation. Because right now we transform to SpanData, which is pretty cheap because it's mostly just wrapping, but then the exporter has to also transform to OTLP. What if we wrote kind of a custom SDK that just used the OTLP representation as the internal representation for the span?
A
Very, yeah, exactly. And there are always going to be ways that we can make it faster for narrow use cases, but I think we should resist trying to do that, because we're always going to be making compromises for other use cases, and I'd rather focus on making something that's really solid, general purpose, rather than trying to be, like, I'd rather be a Swiss Army knife in this case rather than, like, a super.
C
Yeah, and, like, if those optimizations seem generally applicable, of course, I think it's still okay; we definitely don't want to optimize at the expense of any other use case, that's a no-no. And so this BSP, like, I'm still not quite clear on which it is yet. Again, like, generally signaling seems sort of better than polling, but unless we have more stable numbers, it's hard to see anyways.
A
Then that isn't, I mean, that's literally not even benchmarks. That's more like, that thing was built for: what happens if the network is having problems with the OTLP exporter, and just to make sure that we weren't dropping things, or rather that we were dropping things before causing memory leaks. So it had a different. That's.
A
I don't know how you would get CPU numbers out of it that are stable.
B
Trying to put a day a week in on it, but yeah.
A
It has a little, it's just basically, yeah, inside this directory. My goal was basically: let's try to see what happens if you stress the pipeline, and in this particular case, what happens if you actually just make the network have problems, like drop connections or have trouble connecting for a couple seconds, and then it measures, what, basically, it uses.
A
It does two crazy things: one, it parses the logging output of the collector to say how many spans the collector has seen, and it also uses the metrics that are recorded by the BSP, and maybe it's just the BSP, and it spits all that stuff out. So you can actually see, like, how many spans did the exporter see? How many spans did the span processor see? How many did it, I think, drop? How many actually made it to the collector? So it actually outputs all that data.
A
Down a little bit, maybe. Let me look at the code; I haven't looked at this in detail in a while.
A
Do we connect properly? And we do reconnect properly, and everything works well, but this is really just more of a testing harness; it's not supposed to be a reproducible benchmark. There's commented-out code in there, but it's basically a way where you can just set things up and play around and see how things behave.
D
Yeah, I want to try this out. I mean, pull out the Toxiproxy stuff and try it out without our exporter, because yeah.
A
Fun stuff, you can see. It's actually interesting: when I was interviewing with John Bley at Splunk, he's like, hey, if there's one thing you think we should do, what would it be? And I'm like, we should test to see what happens when the network misbehaves. Like, all right, when you get hired, do that. And you're right, here, this was my experiment.
C
So that issue, yeah. And so then Tigran was like, yeah, but there are actually, like, five use cases. One of them is this buffering; one of them is, say you wanted an archive, and whatnot. But yeah, that sounds right, but what I gathered was that the main one they're looking at is this buffering, which makes sense. Like, even the Amazon frameworks team was like, if you can just make this exporter use a file, we'd be much more comfortable than with this weird gRPC thing, and, like, this happens. So it's definitely an interesting conversation.
D
Go, yeah. I asked because all of the Azure Monitor exporters have, or traditionally in the past have had, that feature, and so, I guess, we're supposed to keep doing that in the future: to save on network failure, yeah, to store it to disk on network failure, up to, you know, some small number of megabytes.
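The behavior being described, buffering failed exports to disk up to a small size cap, can be sketched roughly as follows. This is an illustration of the idea, not the actual Azure Monitor exporter code; all names are invented.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of disk buffering on network failure: append the serialized
// batch to a file, but only up to a fixed byte cap, so that a long
// outage cannot fill the disk. All names are invented for illustration.
public class DiskBuffer {
    private final Path file;
    private final long maxBytes;

    DiskBuffer(Path file, long maxBytes) {
        this.file = file;
        this.maxBytes = maxBytes;
    }

    // Returns true if the batch was buffered, false if the size cap
    // would be exceeded and the batch is dropped instead.
    boolean offer(String serializedBatch) throws IOException {
        byte[] bytes = (serializedBatch + "\n").getBytes(StandardCharsets.UTF_8);
        long current = Files.exists(file) ? Files.size(file) : 0;
        if (current + bytes.length > maxBytes) {
            return false; // over the cap: drop rather than grow unbounded
        }
        Files.write(file, bytes, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return true;
    }
}
```

On recovery, a background task would drain the file and re-send its contents; that half is omitted here.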
B
It's an interesting use case, though. I mean, whenever I think about this, I'm like, well, if your network goes down or is being sporadic, isn't your app, the one you're trying to APM up, going to have all kinds of other problems anyway? Although maybe the counterpoint is: that's exactly when you want all this data, and so losing it is bad. I don't know, I've heard both. I think about both.
D
So I think with our customers, sort of, we have historically been more like a logging product; they almost treat it as more like logging than APM, and so they do want, you know, that more. I can see that as a use case: you know, you don't want to lose your logs, versus APM data, which is, you know, heavily sampled, not a big deal if you miss out on it. And yes, you have bigger problems if your network is not connecting.
B
Yeah, I mean, I guess I'm using APM as, like, an umbrella term there, but, like, any sort of forensic diagnostic telemetry, including logs, right? Like, that's, I guess, when you do want to know, but again, it's probably just going to show you that there was some external problem. It's like, when you were, like, 20 years ago.
B
The database used to always go down, right, and we'd always talk about the database being down, and so you knew that if your database was down or your network was down, your app had problems and it was just dead in the water. And do you say databases don't go down anymore? Well, they definitely do; it's just more like they're slow.
B
Yeah, whatever, I think, you know, we've passed on this topic before. This buffering through files, though, is interesting; I hadn't heard it being asked for.
C
Yeah, it was just such a coincidence, like, to see that issue come up around the same time I heard people at Amazon also sort of wanting it, and so this might be a broader thing than I thought it was. Like, I personally find that it works fine, but especially with gRPC, it's so different from what people might be used to that a simpler solution could be better, maybe.
A
Just dropping, with, I think there might be packet dropping, but I'm not 100% sure. And also, the other thing I found out was, especially since with HTTP/2 gRPC keeps a connection open for a long time, if you're trying to introduce latency in the connections it doesn't really work, because the connection is held open, and so it doesn't let you put latency in the middle of the connection, like the HTTP connection; it only lets you do connection-level things, like connection.
A
Well, if you know how to do it, that's open source; PRs are welcome.
B
Much, well, I'm going to drop off here, I think. Same. I think, Watson, were you going to PR that thing you were working on earlier, that span processor? It's.
A
That's worth actually chatting about for a moment: how do we feel about me hijacking the internal weak cache map from the context implementation? I don't know.
A
That's good, cool. I learned a lot while I was implementing that; it's interesting stuff.
C
We can consider filing a spec issue to do what Zipkin does, which is actually flush, export, the span that isn't ended. Like, instead of having a map to, like, true, you can have a map to the span context, which allows you to actually export the span if it doesn't get ended, and you add an attribute, like, force-flushed. Oh, interesting.
D
Yeah, I had noticed that. It's a good.
D
That was open for days, Bogdan, yeah. I noticed that. I mean, this one Frank just submitted yesterday, I guess that hasn't even been open long, but yeah, they used to get, you know, some comments right away on.
D
Cool, all right, yeah. Actually, I've been really happy that OTel Java went 1.0; that's been very nice.
C
Yeah, yeah, so, yeah, AWS: I was pushing everyone to release quickly, which is good, because without a release you can't do anything. But in the end we're probably going to release ours in, I think, two weeks, because we decided to do a monthly cadence instead of just releasing whenever, and so then, instead of this week or next week, it lines up with two weeks from now. So that's when we're going to release, I think.
D
Cool, and so "we release" means our distribution Java agent?
D
Release, release 0.19, you know.
D
I'm going to release a 3.0.3 beta next week that has the 1.0, yeah. But mostly happy because the Azure SDK folks have had this weird dependency on OpenTelemetry also, and so we just, like, we always have to stay in sync, which we never do, and so I basically had told users: don't even try until we hit OTel.
C
Yeah, it's actually, like, I don't think we can talk in too much detail right now, but I was looking at the Kubernetes instrumentation.
C
Actually, I have an in-progress PR where I might change it back to just using OkHttp, and we can look at it, because I noticed that we're basically just re-implementing the OkHttp instrumentation in the Kubernetes one right now, because they both instrument the same types, OkHttp Call objects and whatnot. And so in my mind I had these things, like: where would we draw this line?
C
Why would we instrument the Kubernetes API in this case versus OkHttp? But, like, I think one line is: if the library provides a tracing abstraction, then of course we should use it. If it doesn't, then instrumenting only the transport is better than just re-implementing their API. Maybe, I don't know; it's just something that came to mind just yesterday.
D
Yeah, so I think, if I remember, there are a couple of spans, like, about the name, like Kubernetes namespace.
D
Oh, these ones, this was your, the caching one.
D
No, no, I mainly put it there as a to-do for Trask. But yes, no, I agree, it's not a blocker, yeah. And this is the, yes, the debug, yeah. So I think we're all good.