From YouTube: 2022-06-01 meeting
Description: cncf-opentelemetry meeting-2's Personal Meeting Room
A: Okay, so for this Wednesday meeting I first want to give a quick update on what's going on, because not everyone attends both meetings. I think the only place where we're making progress right now is the Events API, where we have an OTEP that Santosh opened; there is some discussion happening there. If you haven't seen it, take a look. I also worked on a prototype for the Events API within the existing JavaScript SDK.
A: We have lots of other areas where we have these different discussions. Yesterday we talked mostly about optimization of the payload. There's a concern that repeating the namespaces for attributes, or even the attribute names themselves, makes things too long for the browser.
A: So Santosh was proposing that we add short codes for attributes, which would be part of the semantic conventions.
B: I wanted to talk about... go ahead. Sorry. I mean, I myself am not 100% convinced that would be an approach without any issues. I think it reduces debuggability, right? Anytime there are obfuscated variable names, readability suffers.
B: So I'm not 100% sure we want to go that route.
B: One alternative approach is that the short codes need not be auto-generated; we could explicitly specify shorter variable names in place of those namespaced variables.
C: After everyone left the meeting yesterday, we had a bit of a chat, and I don't think we should go with the short codes just yet. I think we have bigger fish to fry and define before we worry about figuring out how to do that. And in terms of having a hybrid namespace where part of it is shortened: I think you either go the full length or you don't.
B: Okay, yeah. I think we talked about it only in the context of the attribute value being any object; this was only in response to that concern. Otherwise I agree, it's not an important item right now.
A: Okay, so with that I wanted to lead into the next topic I have, which is: we've had lots of discussions and lots of potential things to focus on. So I wanted to talk briefly with this group about how we want to continue, and what the high-level plan should be. We have a number of different proposals, or OTEPs, that we're thinking about.
A: We have the existing Events API proposal that's out there under discussion, and I know that Ted, you're going to work on the ephemeral resources OTEP.
A: Yeah. So a few months ago we worked on a data model proposal; the link to that is up here in this document.
A: We had two revisions of that proposal, but it did not get finalized, because at that time we thought we should also have a proposal for sessions and a proposal for the Events API to go along with the data model. We've had those discussions, so I do want to circle back now to the data model OTEP and see if it's still needed.
A: Basically, what that proposal said was that for client-side telemetry we are proposing to use events; that was the main thing in it. It looks like there is overall consensus that that's the direction we want to go, and I'm not sure whether we're going to get pushback from the community on that. But I'm not sure.
A: My question was: should we put this out there as we wrote it? Or, as Ted mentioned, maybe we should split it into browser and mobile, to make it more specific.
A: That is, specific to those domains, with the intent that somebody who wants to implement, say, a browser SDK could look at it and know what to do. But are there...
D: My main point, I think, was that I don't know how necessary a vague OTEP that just says "we're going to use events" is. But something that would be helpful for this group, if not as an OTEP, would be to start getting into the specifics of all the conventions we actually want to define: what we need to get into the spec for the browser.
D: That is, an enumeration of all the different browser events, for example, that we want to capture, and the same goes for iOS and Android. So rather than having a generic events OTEP, could we start a document for browser, a document for iOS, and a document for Android that get into all the specifics of those platforms and what actually needs to get updated in the spec in order for us to do a better job of recording those platforms?
C: Yeah, I think we're going to find that there is a high level of crossover between what's required on the different platforms, but in terms of having a starting point to say, "for this platform..."
C: Yeah, I think Ram and I had planned on being a lot more involved at this point, but we just haven't had the bandwidth to do so yet. I'm hoping to change that in a big way in June, but that's still just a plan, and it's June today.
D: ...proposing the semantic conventions, right? We have to actually go through, write all of those down, and then go and implement them. I think that's a lot of work, and if you put them all in one Google Doc it could be a big Google Doc, but that might be fine. At any rate, you could still have a high-level doc describing our overall plan; but as far as trying to submit that as a generic OTEP, I don't...
D: I don't know how useful that is. I think it's more useful for us as just a planning document, to help us stay organized and get into the specifics.
C: Yeah, I guess we still have the question of nested objects. I would prefer to see the semantic conventions be nested rather than flattened, but we have that issue. And, assuming the issue I raised last week lands, we could define the events once and then say how they're going to play in either a span or a log. But I can see a lot of challenges now.
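To make the nested-versus-flattened trade-off concrete, here is a minimal sketch of what flat conventions force an SDK to do to a nested diagnostic object. The function name and the event shape are hypothetical, purely for illustration; this is not code from the OpenTelemetry SDK.

```javascript
// Naive flattening of a nested diagnostic object into dot-notation
// attribute keys, the shape that flat semantic conventions require.
function flattenAttributes(obj, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const name = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      flattenAttributes(value, name, out); // recurse into nested objects
    } else {
      out[name] = value; // leaf: record under the dotted name
    }
  }
  return out;
}

// A nested browser-style diagnostic payload (hypothetical field names):
const event = { http: { request: { method: 'GET' }, response: { status: 200 } } };
const flat = flattenAttributes(event);
// flat: { 'http.request.method': 'GET', 'http.response.status': 200 }
```

The walk itself is the cost being discussed: every attribute of every event pays for the recursion and key-string building, which is why recording the nested object as-is is attractive on the client.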
D: Yeah, I think it would be difficult for us to change our approach to our semantic conventions at this point, which are flat. But I think you make a good case for being able to record nested objects as values when what we're trying to record are diagnostic objects coming back from the browser or another environment, rather than flattening or manipulating those objects, because that's really expensive.
C: Which is why I think we mention it with logs, because that is what the attributes on logs are defined to support, right? And we definitely have the requirement, both internal and external, of having nested objects that get sent off that aren't always just serialized.
D: You need a trace to model the event handler, right? There's an event handler tree that kicks off, and you want to measure its latency and provide parent-child relationships between those operations and so on. But then there's also recording the event itself. Some events don't have event handlers, but when they do, it's helpful to record the event in the context of that event handler, just so that it's captured; it's probably a context thing as well. And... go ahead.
D: We wouldn't want to be recording all of this information twice, right? So maybe there's just a pattern: here are the attributes you put on the event handler span, and here's all the information you put on the event object, and you keep the two consistent.
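The "define once, keep consistent" pattern described above can be sketched as a single attribute builder shared by the handler span and the event record. Every name here (the builder, the span and event shapes) is hypothetical; real OpenTelemetry spans and events have richer APIs.

```javascript
// One attribute builder feeds both the event-handler span and the
// fire-and-forget event record, so nothing is defined twice.
function buildClickAttributes(domEvent) {
  return {
    'event.type': domEvent.type,
    'event.target': domEvent.targetId,
  };
}

function recordClick(domEvent) {
  const attributes = buildClickAttributes(domEvent);
  // Attributes describing the handler's execution go on the span...
  const span = { name: 'click-handler', attributes: { ...attributes } };
  // ...and the same core attributes go on the event object.
  const eventRecord = { name: 'browser.click', attributes: { ...attributes } };
  return { span, eventRecord };
}

const result = recordClick({ type: 'click', targetId: 'buy-button' });
// result.span.attributes and result.eventRecord.attributes are
// consistent by construction.
```

The design point is that consistency comes from sharing the builder, not from a convention that two hand-written attribute lists happen to agree.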
C: A good example is AJAX requests in a browser: they naturally are enveloped in a span, and you've got two different types of events there. One is just recording the fact that the dependency call happened; and in a browser you've got all the extra timing, depending on the browser, that you want associated with that as well. So those really do naturally fall into the span/trace category. But most other events don't; they're just fire-and-forget, what you'd call diagnostic-log-type things.
A: So on this, Nevin and I also talked yesterday about whether there's a way to make faster, or at least incremental, progress. One of my concerns is that with these OTEPs, I'm not sure how long it's going to take to get them discussed, accepted, and then turned into spec.
A: And even after that, we're thinking about an optimized SDK for browser or mobile, so that could be six months from now or longer.
A: I was wondering, and I'm curious about everyone's thoughts: would it be possible, as we're working on this Events API and some of these concepts, to first implement them in the existing JavaScript SDK as experimental, and maybe try to get them actually merged as experimental, so that people could try them and give us feedback?
A: Also, as we're working on some of these semantic conventions for individual events, we could actually implement them and see what that looks like. That would help us get something out there to users, because we have customers who want to use browser now. Once we have all these OTEPs accepted and can start working on the optimized SDKs, that would be great, but in the meantime at least we'd have something.
C: Yeah, so the plan is that we'll talk to Daniel after this meeting in the JS SIG. And in terms of the Events API that Martin just talked about, I don't see that there'll be an issue with putting that into the experimental area for the JS stuff.
C: I don't think the rest is going to fit into the existing framework, though, so I'm not quite sure where to host that. I spoke to Ram yesterday about whether we could do it at Microsoft; he would prefer to see it in OpenTelemetry somewhere. So I guess, Ted: is there an experimental namespace somewhere already in or under OpenTelemetry? I would want this thing's package name to be experimental-web-js or something: something that says this thing is not going to live forever.
C: It's here to provide investigation and feedback; it will die at some point.
D: Yeah, well, GitHub orgs are flat, so...
D: Yeah, I would suggest just putting it in its own repo at the org level. Maybe talk to the JS SIG about package naming and things like that, since that's kind of JS-specific.
C: ...because then we'd get the owners set up to be able to manage that one.
D: Yeah, you'll have to get a TC member to create the repo for you, but I think that would be good, because then you can create a different permission structure for it. So I think the next step there would be to create a community issue requesting the new repo for the project to get started in. I wonder, though: is it...
D: Is it going to be temporary? If the plan is to say, "look, obviously we need a web JS version of this thing that's so different from the Node.js one..." I believe the JS stuff is already split up into several different repos; maybe that was recombined again, but...
C: Well, there are three: there's the API, there's the main JS, and then there's the contrib, and then there are individual packages within those. So the idea would be that at some point we go off and play with it, and where possible we contribute this back to the main core, so it helps Node as well; because we're playing with the API, which ideally would be compatible. I say "ideally" because the concept of environment variables is problematic on the web, because there are none.
D: Yeah. But yeah, I would talk to the JS SIG after this meeting and then make a community issue so that you can get the repo started. That sounds great. Okay, cool.
A: Actually, I think you mentioned it: you had a PR open for that, like a year ago or something? An old one you could try to revive. So, do we need this?
D: It's not necessary; I just think having the conventions refactored makes it easier to talk about them holistically. It's a little hard when they're spread across: there's no high-level place where you can say, generically, "this is how we handle, say, HTTP or browser events"; you have to say, "here's how we're handling trace stuff, or log stuff, or metric stuff." So I think it could be helpful, but it's not necessary. I can try to resurrect that PR. I might.
C: And that's probably where we would want to add short-code definitions later. Yeah, something like that. So you'd look at that and say, "well, this is the standard name, and this could be the short name," or a potential short name, but...
D: Yeah. So for the short codes, I think we were discussing that maybe this could be a v2 thing. One thing that I think would be helpful with any optimization is to have some data. Out of this meeting, my first question would be: how short do they have to be in order to have savings that are worth the work? Because, as mentioned, they're going to cause confusion.
D: It's going to add complexity, and it's going to take work; but what do we get? If they're half as long, is that worth it? What if we just delete the prefixes? Do they have to be, say, two characters, and what would we get out of them if they were? I don't know if there's existing research, but it seems like, before we implement short codes, we'd want to run that experiment and prove out what kind of cost savings we would get.
C: Yeah, and it's not an easy calculation. For example, in JavaScript, say you want to remove all the `this` references, because `this` isn't minifiable: `this` takes up four characters, so you define a local variable.
C: You've got the variable declaration, say `var a=this;`, so you've already got around eight additional characters plus the semicolon to play with, which means it's not until you've used it at least three times that you break even; after that is where you start seeing the gains, and that's assuming it gets minified into a single character. And that's for a four-character name. If you've got a really long function name like `hasOwnProperty`, then effectively, instead of repeating `hasOwnProperty` several times...
C: ...you write it once, with those extra characters to define the local variable, and the savings start adding up pretty quickly; there you probably get away with using it just two times or more before you start seeing gains. So it really is a case of "it depends": on the name, the semantic name, and how long it is, to determine how much saving you're going to get. Ideally you want it to be one or two characters, because then it's only one or two characters versus ten, fifteen, twenty.
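That break-even arithmetic can be written down as a tiny helper. This is illustrative only: `declOverhead` is the character cost of the aliasing declaration (for example, `var a=this;` is 11 characters), and each subsequent use saves the difference between the long and short names.

```javascript
// Minimum number of uses before aliasing a long name to a short local
// variable pays for the declaration itself.
function breakEvenUses(longLen, shortLen, declOverhead) {
  const savedPerUse = longLen - shortLen;
  if (savedPerUse <= 0) return Infinity; // aliasing can never pay off
  return Math.ceil(declOverhead / savedPerUse);
}

// "this" (4 chars) aliased to a 1-char local via "var a=this;" (11 chars):
const thisAlias = breakEvenUses(4, 1, 'var a=this;'.length);
// "hasOwnProperty" (14 chars) aliased via "var h=hasOwnProperty;" (21 chars)
// pays off much sooner:
const hopAlias = breakEvenUses('hasOwnProperty'.length, 1, 'var h=hasOwnProperty;'.length);
console.log(thisAlias, hopAlias);
```

With these assumed overheads, the `this` alias needs roughly three to four uses to pay off while the `hasOwnProperty` alias pays off by the second use, which matches the rule of thumb above: the longer the original name, the faster the alias wins.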
B: Yeah. I also feel we need to get a better understanding of how this will work end to end. Let's say that, statically, you generate a semantic conventions library that has both the original names and the short names, and then you have a flag that decides which one is in effect; on the client side we'd then use the short codes on the wire.
B: There'd be shorter packets on the wire, but what needs to be done on the server? And for troubleshooting on the wire, what options are available to quickly toggle the flag? I myself have never worked on obfuscating the data.
B: I have only seen it from the outside. At Microsoft, whenever I see the data passed by some of the popular products, like Yahoo or Google, it's all obfuscated; I never understand what gets sent. But how do they troubleshoot things? If there are standard practices being followed, and somebody can present them, then I think that's a minimum requirement for us to even think about this.
C: Yeah, but I think we push it down the road; I think we need to spend time defining the events first. Because there are also libraries like MessagePack, which will do this stuff on the fly, that we can look at playing with. I think most of the things you're talking about are cases where you pack things into binary, which makes it look crappy.
D: Something that might be helpful in general, for things like short codes but really for the whole experimental web JS thing we're doing, where the point of the whole thing is optimization (that seems to be the main thing we're going after), is developing some kind of test environment where we can actually record the information we care about.
D: Like: what is the footprint size of everything being loaded up when you run some average JS app? It could come from the demo environment people are putting together, but some way of being able to say, "okay, we made a change to how this experimental SDK works, and now we want to see how it performs compared to prior versions, or to the stock SDK that we currently have." Basically, having benchmarks for the targets we're trying to hit.
C: Yeah, I was just going to grab some links and drop them in the chat, because there are multiple levels that we measure (some links I can't put in because they're internal). One is effectively the basic payload, which is a lovely big table for App Insights, and there is a reason for the colors: effectively we're worried about the gzip size. We want the gzip size ideally to be less than 30k, and as you can see, we're nowhere near that.
C: So that's one view of affecting the payload size. The raw minified size is effectively the size of the code once it's extracted in the browser, so that directly affects initial memory, because that's memory consumption.
C: We have a bunch of internal tests which take that further. We actually have testing frameworks that measure the amount of memory used and drop that out into graphs similar to this, by version number: just loading the SDK, initializing the SDK, running with events with 20 fields, 40 fields, 100 fields, running with multiple numbers of events.
C: We batch them up internally, drop the results out, and call them our perf tests. They're problematic because they're virtualized per test, so we actually have to run every version every day, just in case the infrastructure they run on is overloaded that day. But yeah, it's a big set of stuff, and this doesn't include... you can even dig deeper down using... oh, what's it called?
C: Google has, effectively, its Lighthouse stuff; there's an instrumented Chrome, and you get way more details about how and when it loads individual things, first-paint times, and all that sort of stuff. That's a bit harder to graph, but you can do it manually. And there's WebPageTest: at webpagetest.org you drop in a URL and they'll actually instrument it for you and give you all the details.
C: As for the impact of this downstream: we're not measuring the actual payload size today, which I think was one of the things you mentioned, and that payload size then directly affects the server as well. But yeah, I don't think it's a simple thing. This took us about six months, and we had an intern internally playing with it to get it all going, using existing infrastructure that we have within Microsoft. So, the table that I dropped...
C: That's easy; anyone can do that. That's just using very public stuff, as long as it's hosted on a CDN, because these are all hosted; it gets those numbers live from the CDN.
D: Great, okay. Well, I definitely don't think we want to get mired in a really difficult-to-maintain perf-test CI environment or something like that. But it seems like, maybe, as part of developing this experimental JS repo, we should try to develop these...
D: ...these test harnesses to measure the various things we want to optimize. Maybe starting with package size and network payload size would be useful, because if we're going to run experiments, we want to be able to figure out, especially for things that are going to add complexity, that we've gained something valuable from adding them.
D: So maybe, as part of developing this out, we want to keep in mind measuring these things in a way that's repeatable, so we can publish the results, and stuff like that.
C: If we do that... so, instead of sending it out the door: sometimes you're saying, "let's look at the web server logs and see how big they are today." That gives us a baseline. We then create tests that duplicate that. We can effectively do that at the exporter level, so it doesn't actually leave the box: it just packages it up and says, "oh, it's this big, and hey, by the way, I accepted it." Then we turn on our config flag and see how it changes that value.
C: But we need the history; the history is always the fun bit. It's effectively a database somewhere. In that table there, our database is a CDN, so we're cheating in terms of published packages. For this test infrastructure, I don't know: does OpenTelemetry in general have somewhere for this stuff that's not just Google Docs?
C: Yeah, so we'd probably have to come up with something. I think we took the snapshot and then stuffed it into GitHub so that we could go and generate graphs or something from it.
D: As much as we can, we're trying to stay on GitHub for all of that: using GitHub Actions as the place where we run all these things, and then committing the results. I could get them to go somewhere, because that's the infrastructure that we have, that we can get at Microsoft.
A: Cool. So, one more related topic that I think was actually left over from last week: the project board that we have. My question would be: can we start using it for tracking some of these things, and does it work the way it is now? Here's the link.
B: Or do we need to add them manually? There's an "Add item" option at the bottom.
D: I think you have to manually add things to the board right now, when you create the issue; but they're working on this stuff.
D: Because it's all in beta, though, I don't know where all that stands. Martin, I think you had some ideas about a better layout for this board? Maybe that's...
A: More just questions. I guess I didn't understand how the board works. I was expecting a to-do list, the things that are in progress, and the things that have been done, whereas I think this is more like a plan: it has the stages of things we want to work on.
D: Yeah, I think we could maybe get rid of this v1/v2/v3; that was a bit of a guess.
D: We could just simplify this down to to-do, in progress, and done, and then maybe a fourth category, which is... at Pivotal we called it the icebox: issues we want to track but that aren't on our to-do list. In other words, you want a general hopper of issues so you don't lose them; the to-do list, on the other hand...
D: ...you ideally want ordered, or at least partially ordered. Those four categories tend to be good enough. And then having a weekly sprint meeting, which essentially is this meeting or the Tuesday meeting, where we review the board and get up to date; and you potentially use your to-do list as, like...
D: Potentially, you can have the to-do list represent a window of time. Not just "in general, we want to do this," but "these are the things we're going to try to tackle in the next two weeks, or the next month," or something like that: this is what we're going to try to churn through. And see if that can maybe help us move a little bit faster.
A: Yeah, I think it'd potentially be helpful just to know where people can contribute and what the status of things is.
D: These boards also have a sidebar where you can add additional information. I don't really know how useful it is, but if you pop that open (project details), we could add some additional information there, just to point people to our agenda, meeting times, and things like that, so people who find the board can find everything else.
B: Yeah, I have one completely off-topic, but general, topic. I want some understanding from you, Ted.
B: This is a general OTel concept. In all your talks, you highlight that applications should use the APIs and that the SDKs can be plugged in at runtime, so that there are no dependencies. But I feel the term "SDK" is often misused in OTel, in the sense that when application developers want to integrate, say, by doing manual instrumentation, they...
D: Yes. So that's where we separate things out: instrumentation from setup. When you're instrumenting, you're writing API calls: start a span, add a metric, create a log. But that API is separated from the SDK.
D: At runtime, when you start your application up, that's when you choose to install what's going to receive those API calls. One option is the SDK we provide; you could also write your own SDK, or some other thing to receive them. At startup, the SDK... you'll notice they have all these builders, and then...
B: But as a new developer, if I were to build this app with just the API jar files in my classpath, it should ideally build, right? If I...
D: Right, right. So the way it works is, if you don't want those to be no-ops, you register... so, OpenTelemetry is broken down into providers: there's a tracer provider, a meter provider, a log provider, a propagator, and those are what plug into the API calls that you make. In your instrumentation you're grabbing a provider and doing something with it; by default, that's going to be a no-op provider.
D: If you want to do something with your data, then at application startup you create an SDK and register it as a tracer provider or meter provider. That has to do with a couple of reasons. One: instrumentation may be installed in things like shared libraries, where the library gets loaded up in some environment where no one's using OpenTelemetry, and so it shouldn't haul in the large dependency chain that the SDK pulls in.
D: The other reason we keep them separate is that, while we've provided an implementation, we don't think it's realistic to say it's the only implementation, or the one that's going to work in every single environment. So we want to give people an escape hatch: if there's something about our SDK implementation that doesn't work for them, because of a dependency conflict, or efficiency, or some other reason, then it's possible for them to make their own SDK, or fork this SDK, or do something with it, and then load their implementation in an ordinary manner. That's hard to do if everything is implicitly dependent upon our SDK implementation.
D: A couple of examples. In testing, we provide a mock SDK: the implementation is a mock, or a fake, that you can program to induce certain behavior; it just records everything in memory and lets you access it the way you would access a mock.
D: Or, in that vein, rather than loading up the native SDK, you could choose to import, say, the C++ SDK and load that one up, which I think would give a huge performance boost in, say, Python or Node.js or Ruby.
D: But the flip side is that you're taking on a C++ dependency when you use that version, and there are plenty of Python environments, or other places, where that dependency wouldn't work very well. So you want to have the two options; that's why we have the SDK and the API separated like that.
D: It's just loose coupling, so that you're not saying, "because I instrumented with OpenTelemetry, I'm now totally strapped and chained to this particular implementation that we're building in these different languages." So in the case of the web experiment that we're doing, all the instrumentation that we write there should work with either implementation.
D: You should be able to plug the current SDK into it and have an app instrumented with the API work, or you should be able to go into your app boot code, change it to load up the experimental SDK instead, and see the differences. So that's why it's separated out like that, and why you see SDK stuff in setup; setup (and teardown) is the only place you should see people touching the SDK.
D: Yeah. And I wish more languages provided more support for this kind of loose coupling, because I think loose coupling is a really important programming pattern that we don't utilize often enough; partially because it's not particularly well supported in a lot of languages.
B: ...that we would be giving them the SDK, correct? So any code that the customers write should be portable across vendors, for when they switch vendors.
B: And I was just surprised that, if they have to use the word "SDK," then their builds will have to be dependent on the SDKs.
D: It's just this basic pattern of separating interface from implementation. When you're instrumenting, you're instrumenting against an interface, not against an implementation. If you have that separation, it means you have some flexibility over which implementation you're using, without having to resort to really hacky techniques of faking out which package you're loading and stuff like that, which would make it hard to provide an alternative distro or implementation.
D: If you wanted to do that, you'd have to do some hacky thing in your package manifest to say, "actually, this library is really this other library," or something. So that's all it is: just keeping that clean separation, which comes from OpenTracing. In OpenTracing there was no implementation; we had no SDK.
D: All it was, was the interfaces you needed to write your instrumentation, plus a registration mechanism for being able to load an implementation, and everyone was expected to provide their own. So Jaeger had an implementation, Lightstep had an implementation, everyone had their own implementation. But what we discovered there is that that was really annoying.
D: The problem with OpenCensus, on the other hand, was that it didn't have this separation between interface and implementation. So if you were instrumenting against OpenCensus, that library was now directly linked to the implementation, and potentially to a specific version of that implementation. With instrumentation libraries, backwards compatibility and dependency conflicts are really, really big problems that you have to be very thoughtful about.
D: I think gRPC is a classic example. Lots of libraries depend on gRPC; so, not just my application code, but I depend on a library, and that library uses gRPC to do whatever it does; and then I depend on another library that also uses gRPC, but they use different versions of gRPC, and those versions are incompatible with each other.
D: And now I have a dependency problem that I can't solve, because these libraries are depending on something that's in conflict. What we want to ensure is that OpenTelemetry is not creating that situation for people, or, if it does, that they have some escape hatch. And that means the API has to always be backwards compatible.
D: Two different versions of the API are not going to create a dependency conflict; but it also means that if the SDK has some dependency conflict with other stuff you're using, you should be able to load in a different SDK, or somehow solve that dependency problem, without having to go to all the different libraries that were pointing at some version of OpenTelemetry and sort out all of that nonsense.