From YouTube: 2020-12-11 meeting
C
Could probably get started. So — wow, that's pretty rare, a minute and a half after the hour. That's really good today.
C
Yeah, everyone please be sure to put your name down on the attendees list for the SIG meeting notes. I'll start sharing here in just a second, and if you have any agenda items, please be sure to add them to the list, and we can just start off. So yeah, thanks.
C
Thanks, everyone, for coming; welcome to the OTel Go SIG meeting. Just a high-level overview: it looks like we're slowing down a little bit on our progress, similar to the past week or so. I imagine, with the holidays and everything going on, it's probably going to continue like this. One of the first things I wanted to talk about was this versioning policy draft.
C
It's been the majority of my time this past week — and last week too — working with the OpenTelemetry community itself and trying to come up with a strategy for versioning, specifically for this project: defining this policy to actually describe the backwards-compatibility and stability guarantees, as well as how we can progress the project in a way that's communicated to the community going forward.
C
If
you
haven't
already,
please
take
a
look
at
this
there's
a
lot
of
detail
here.
I've
updated
to-
and
I
think
last
night
was
the
last
time
so
or
I
guess
more
accurately,
probably
about
12
hours
ago.
So
I
think,
there's
been
some
slight
changes.
The
the
main
idea
here
is
that,
as
I
was
presenting
last
week,
we
want
to
try
to
use
the
versions
to
actually
indicate
the
stability
of
the
particular
modules
partition
the
project
by
module.
C
Excuse me, sorry. So, like, the trace package can be released with different stability guarantees. At the bottom here I have an example of this policy — this really long-winded policy above, which, please verify — but this is an example to show you what happens if you initially start with the project. We just go over this with these package structures, kind of what it looks like today, and then we split it off so that each one of these is a module.
C
We
can
then
go
through
some
sort
of
versioning
release,
so
we
could
release
the
hotel
package
if
we
disentangled
the
metrics
dependency,
the
trace
package
baggage
package
and
the
sdk
trace
package
as
a
release
candidate
independently,
given
that
they
would
be
their
own
modules.
At
this
point
we
can
maybe
even
have
revisions
on
the
release.
Candidate
is
the
idea
here,
maybe
there's
some
sort
of
release
candidate
too.
That
needs
to
go
out
before
we
finally
version
these
all
as
a
1.0
release.
C
That
being
said,
therefore,
if
like
we
wanted
to
update
something
in
the
metrics
package
and
at
the
same
time,
maybe
have
a
patch
release
for
the
baggage.
This
is
how
that
structure
would
look.
The
metrics
package
would
get
incremented
still
remaining
at
a
major
version
of
zero
same
with
the
sdk.
That's
also
related
on
it
there's
a
dependency
there,
so
that
would
also
be
incremented
in
lockstep
with
it
the
1.0,
the
v1
packages
themselves
would
have
a
patch
release.
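(For illustration, a sketch of the version state being described in this scenario; the module names here are shorthand for the example, not actual import paths:)

```
otel           v1.0.1    stable; patch release
otel/trace     v1.0.1    stable; moves in lockstep with otel
otel/baggage   v1.0.1    stable; patch release for the baggage fix
otel/metrics   v0.x+1.0  pre-1.0; incremented independently, major stays 0
sdk            v0.x+1.0  pre-1.0; depends on metrics, bumped in lockstep with it
```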
C
That's to conform with what the OpenTelemetry community itself wanted: making sure there's a communicated, coherent supported version, so there's not, like, a 1.0 of one package and then a 1.0.1 of another, or something like that. They all, in lockstep, have the same major version — they'll have the same version, is the idea.
C
That
being
said,
when,
like
the
versions
package
becomes
mature
and
you
wanted
to
become
released
as
a
1.0
again
in
lockstep,
all
of
the
packages
would
be
incremented
the
1.1
to
indicate
that
there'd
be
an
added
feature,
that's
being
added
in
conformance
with
the
december
v2,
so
that
would
that
would
take
resonance.
C
So
this
kind
of
describes
a
little
bit
of
the
versioning
scheme,
there's
a
lot
more
scenario
that
could
be
added
to
this
example.
I
just
wanted
to
start
with
the
bare
minimum
that
was
asked
from
the
open
to
launch
community.
The
original
dock
had
a
lot
more
details.
One
of
the
things
to
keep
in
mind
is
that
the
versioning
with
this
is
going
to
be
following
the
semantic
import
versioning.
So
as
we
go
to
v2,
there's
going
to
be
a
v2
directory
v3
of
v3
directory.
That
kind
of
thing
definitely.
C
If
you
wanted
to
do
some
more
reading
on
it
but
yeah,
I
can
probably
stop
talking
there
and
see
if
there's
any
questions.
Otherwise
we
can
progress.
The
agenda.
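(For context on the semantic import versioning mentioned above — a hedged sketch of the layout; the module path is invented for illustration, not the project's actual path:)

```
# Under semantic import versioning, each new major version lives in its
# own directory with its own go.mod, so v1 and v2 can coexist in a build:
example.com/otel       # go.mod: module example.com/otel     -> tagged v1.x.y
example.com/otel/v2    # go.mod: module example.com/otel/v2  -> tagged v2.x.y

# import "example.com/otel/trace"     // resolves to the v1 module
# import "example.com/otel/v2/trace"  // resolves to the v2 module
```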
C
Yeah, it was filed — yes, 21 hours ago, so hot off the presses, yeah. Okay, let's see.
D
Yeah, it was hard to tell as you were scrolling through it; it looked like all the packages' versions are supposed to increase in unison. I was wondering why we have separate versions for separate packages then, but then I saw there was an example of one case where one of them was different from the rest.
C
Yeah, right here. So the idea is that at this stage in the life cycle — sorry, I'm talking really fast because I've gone over this for two weeks now, but yeah, good question — the metrics package, at this point in the example, is not stable. It's not released as a mature, stable public API for which we're ensuring backwards compatibility.
C
So
this
is
why
this
is
not
at
lockstep
at
this
point
in
time.
If
you
know
just
for
another
hypothetical
here
steve
like
a
grade
in
other
examples,
if
we
added
a
logs
package
at
this
point,
the
logs
package
could
also
be
a
you
know,
a
v001
or
something
like
that.
That
would
remain
at
its
own
versioning
at
this
point,
but
it
would
not
be
at
v1.
C
Whatever
is
the
idea
as
long
as
it's
not
at
v1,
then
it
doesn't
have
to
track
at
the
same
cadence
as
all
of
the
stable
packages.
Is
the
idea
to
this
this
versioning
scheme
here.
D
Yeah, I was just thinking about whether it would be possible to split this out into, like, incubating versus graduating, so that, moving forward, if we had, you know, 20 packages and they start graduating, it means every time there's a version change we need to do version work on all 20 of them — even though we're saying we mandate that they move in lockstep.
E
Yeah, I mean, something graduates from experimental — where everything can break at any moment — to stable and supported. I think those are the only two states we're contemplating, right? So all of our release infrastructure is already set up to version everything in lockstep; I think, from a release-mechanics perspective, this doesn't change much for us.
C
Yeah, currently, as it is, you know, when we release v0.15 or something like that — which we may be talking about later in this meeting — all the packages go to v0.15 right now; or rather, all the modules go to v0.15. That's just how we're currently versioning, and then, when you go to contrib, all of the packages go to v0.15 when there are changes —
C
— to those packages; those are going to be 0.15. So it's kind of the same; it's just that there's only a slight difference, as was pointed out, in the split now: everything goes to that same lockstep version if it's a stable package. Otherwise, if it's — as I think you said — incubating, or experimental, or something we don't have stability guarantees or backwards-compatibility guarantees on, those are the pre-1.0 releases, and those can be independently versioned, I guess, is how to think about it.
D
Yeah, yeah. It transfers some burden onto users, though, because you can imagine, if they're at an upgrade step and they run a `go get` and expect to edit their module files, they're going to have to go through each of these.
E
So I think, as long as they stick within the same major version — right, well, major version 2 will be a different story, but let's discuss v1 for now — if they update just, say, otel/trace from v1.0 to v1.1, and otel/trace depends on otel, then that will implicitly get pulled up. It's likely that they won't even have that dependency in their go.mod unless they also have a direct dependency on it; Go's minimum version selection handles it.
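(A minimal go.mod sketch of the situation being described — the module paths are invented for illustration:)

```
// go.mod of a hypothetical user application.
module example.com/app

go 1.15

// Only the direct dependency is listed. After
//   go get example.com/otel/trace@v1.1.0
// Go's minimum version selection also raises the indirect
// example.com/otel module to whatever minimum version
// otel/trace v1.1.0 requires — without the user editing anything.
require example.com/otel/trace v1.0.0
```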
E
There's a bridge we're going to have to cross when we get there, regarding how we ensure that we can do in-process context propagation between them, because the SDK uses non-exported variables for the context keys that it uses to store traces and things like that — to store spans. So we'll have to have some way for a v2 SDK to get access to a span from a v1 SDK, if we expect to support them living side by side — which I think is possible, but we'll have to work through it.
D
Yeah, what you point out about the v2 moment: you know, at that point, given that there's going to have to be a change in import path anyway, you really could just consolidate these modules — except for the benefit that you mentioned about letting people pull them in piecemeal if they have a subset of dependencies. But at that point, if users are going to have to edit their import paths anyway... I mean, an example of this is what the protobuf library did recently, where they said —
C
So, that being said, there was an alternative proposal that might still be getting some traction — I saw Yana was reviewing it the other day. This is the original doc itself that we were kind of working off of, and I think this is in line with what you're talking about, Steve; I just want to let you know it's available. There was an idea of having an experimental directory as well.
C
Instead of — this is kind of an outline of, you know, that example I just gave you; this is a more verbose version, or also terser at the same time, somehow. But this is another one: there would be an explicit experimental directory where all of those things existed, and in this scheme the idea was that, as things graduated, they would then move out of the experimental directory into, like, the main otel directory.
C
But, you know, I was not really satisfied with that, because in moving that, that means you're going to break — you're going to change — the import path. So a user that, you know, was relying on the experimental metrics package — and then the metrics package graduated, and there's a release, and all of a sudden that import path doesn't exist —
C
— anymore. That would not be a really great user story. Although Yana was kind of pointing out: what happens if we just maintained both of those packages going forward? So I'm not exactly sure I've thought through all of the pluses and minuses here, but there may be a way to not have to maintain two packages there. It also may allow you to extend packages in a backwards-incompatible way — I'm not exactly sure — but she was thinking about this as well, so yeah.
C
I wasn't aware of that, yeah. I do want to get it right, though — that's kind of the thing — as right as possible. I guarantee that there are going to be mistakes in this no matter what, but, you know, I've been looking a lot at other Go projects and that kind of thing, so any feedback that you want to offer would be great.
C
Cool
we
spent
way
too
much
time
on
this,
and
I
know
I've
bored
yeah
see
josh
on
the
cost.
I
know
aboard
josh
at
least
probably
many
other
people
so
we'll
move
on.
So
I
kind
of
wanted
to
talk
a
little
bit
about
the
tracing
open,
pr's
that
are
kind
of
need,
some
attention.
These
are
just
highlights.
These
are
really
call
to
actions.
This,
I
think,
read
only
read,
writes
fan,
one
is
probably
top
of
the
list.
C
I feel like the rest of the PRs are also going to be asking for reviews, but I think this is a good one, if you have some time — some cycles — to take a look at, because it's restructuring how our trace SDK is handling the span interfaces right now. And I think it's one that's addressing an underlying issue we've had for a while in the SDK, so it's something that needs to be addressed. It definitely is blocking the GA.
C
In
fact,
I
don't
know
why
it's
not
in
there
yeah,
so
we
definitely
need
to
actually
address
this
in
some
way,
whether
positive
or
negative.
So
if
you
have
time
it's
an
important
one,
please
take
a
look
less
of
a
severity,
but
also
something
that
is
blocking
gas.
We
need
to
support
trace
date.
This
is
a
pr
to
add
that
support.
Many
jg
has
done
a
great
job
on
this
really
responsive.
If
you
have
some
cycles,
I
think
it's
another
great
one
to
spend
some
time
on.
C
These
are
specifically
the
trace
ones.
They're,
as
josh
pointed
out
last
week,
there's
some
really
good
ones
for
the
metrics.
In
fact,
there's
a
lot
of
really
good
one
for
the
metrics
coming
in
right
now,
but
we
might
get
to
those
at
the
end.
Well,
we'll
talk
about
it.
So
next
I
wanted
to
keep
going
on
the
agenda.
Chris
mira,
you
had
this
open
issue
here.
Maybe
we
want
to
hand
it
over
to
you.
B
So I was thinking that the follow-up pull request would be adding a protocol driver that handles multiple gRPC connections.
B
So we can have multiple endpoints — multiple different endpoints. But I was now thinking this is probably pointless: having, like, two gRPC protocol drivers, one that manages one connection and another that manages multiple — just two, in our case — because I also added a protocol driver that wraps two other protocol drivers, which uses one protocol driver for metrics and another one for traces.
B
So I just thought it was probably pointless to have a separate protocol driver that maintained multiple connections. That would affect the naming of the functions and how we do the configuration now, so, yeah, I just wanted to have your opinions on probably adding a separate protocol driver for multiple gRPC connections. That would in turn mean, later, if we implement other protocols, that they will also end up having those two variants of protocol drivers, and I think that's probably pointless, in my opinion.
C
That sounds reasonable to me, yeah. I think this kind of gets back down to the minimal need for the implementation of the specification. I know that the specification requires that we have the ability to send traces and metrics to two different endpoints. You know, as Josh pointed out when we initially looked at this, it was like —
C
Well,
I
don't
understand
why
you
don't
just
run
two
collectors
at
that
point
or
sorry,
two
exporters
at
that
point,
but
like
yeah,
I
you
know,
I
think,
to
implement
the
specification
we
need
to
support
like
both
the
metrics
and
traces,
but
like
yes,
kind
of
the
same
thing,
it's
just
like
if
you're
doing
that,
having
multiple
grpc
connections
and
you're
rapping
in
the
way
that
you're
kind
of
talking
about
it
seems
like,
maybe
something
we
could
add
in
the
future.
Somebody
really
wants
it,
but
I
I
agree.
B
Yeah,
I
was
thinking
like
because
the
specifications
say
that
you
can
decide
what
kind
of
protocol
you
want
to
use
just
with
the
some
environment
variables.
I
suppose
we
at
some
point.
We
could
provide
a
function
like
new
default
exporter
or
something
like
that.
That
would
just
configure
all
the
protocol
drivers
according
to
the
environmental
variables
and
so
on,
and
if
someone
wants
to
hard
code
that
they
want
just
a
single
jar
of
pc
connection,
they
can
do
so
in
code
freely.
B
But
there's
a
big
question,
I'm
asking
I
was
asking
because
I
was
asking
this
question
because
yeah
it's
just
adding
more
code
for
more
apparent
reason
and
it
makes
a
configuration
even
harder,
because
there's
also
one
issue
we
will
want
to
unify
the
conf.
The
way
we
configure
things
and
having
a
protocol
driver
that
maintains
two
or
more
connections
makes
this
configuration
more
even
more
awkward
and
inconsistent
with
the
rest.
So
if
I
don't
have
to
do
it,
then
you
can
make
it
like.
B
— differentiation, because they say that, basically, the format of the JSON message is as the gRPC one is documented, with the exception of span IDs and trace IDs: those should be serialized to hexadecimal strings instead of byte arrays, or something like that. Basically, this makes it more annoying to implement, so for now I've only implemented this for protobufs, but yeah.
C
No
yeah,
I
I
was
kind
of
I
think,
sean.
I
think
you
have
the
right
idea.
I
think
we're
just
going
that
direction.
So
that's
that's
to
come.
Christmas
is
hard
at
work.
B
Yeah
next
item
was
about
so
josh.
Had
this
pull
request
for
fixing
the
sumo
servers?
I
might
be
wrong,
but
I
was
just
wondering
so.
The
issue
was
that
someone
created
a
sumo
server
that
always
reports
a
number
100
and
let's
say
that
after
four
collections
you
get
400,
that's
apparently
wrong.
B
If
this
is
a
like
a
valid
fix,
because
I
would
like
the
fact
that
after
four
collections,
I
repo
the
observer
actually
bought
400.
Instead
of
because.
C
So
if
it
always
reports
100
like
you're
not
going
to
increment
those,
like
that's
the
whole
point
of
the
sum
observer,
if
you
just
use
a
generic
value
observer,
I'm
looking
at
josh,
hopefully
for
when
I
talk
out
of
my
side
of
my
mouth,
that
that
would
be
the
thing
that
would
actually
like.
If
you
wanted
to
aggregate
it,
it
would
sum
those
four
independent
measurements
on
top
of
it.
But
if
you're,
if
you're
measuring.
B
Yeah, this might also be very specific to the Prometheus exporter, because the Prometheus exporter is using a selector that picks a sum aggregator for SumObservers, and then after every collection you want to reset it. That makes the sum aggregator rather pointless for SumObserver. So I —
G
So
the
reason
why
we
specify
a
sum
aggregator
for
some
observer
is
that,
if
you're
applying
aggregation
to
reduce
the
number
of
label
sets,
then
you
want
to
combine
those
values
through
sum:
if
we're
not
doing
any
label
set
reduction,
a
last
value,
aggregator
is
gives
you
the
right
result,
and
the
processor
code
has
a
lot
of
special
cases
and
a
lot
of
tests
to
make
sure
this
stuff
works,
because
it's
aware
that
the
inputs
are
already
sums
and
it
shouldn't
be
using
the
last
value
in
this
case,
so
the
behavior
that
is
implemented
in
the
processor
that
we
need
and
the
accumulator
was
just
doing
the
wrong
thing.
G
That's
why
we
were
getting
double
summing
of
these
things.
I'd
rather
not
delay
this
meeting,
however,
to
discuss
this
in
any
more
depth.
It's.
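(A tiny numeric sketch of the aggregation point above, with the same numbers from the discussion — a SumObserver reporting a cumulative 100 at each of four collections. Function names are invented for illustration:)

```go
package main

import "fmt"

// sumOverTime accumulates observations across collection intervals —
// the wrong aggregation for a cumulative observer, which is how the
// 100-per-collection observer ends up reported as 400.
func sumOverTime(obs []float64) float64 {
	var total float64
	for _, v := range obs {
		total += v
	}
	return total
}

// lastValue keeps only the most recent observation — the right result
// for a cumulative observer with a single label set.
func lastValue(obs []float64) float64 {
	return obs[len(obs)-1]
}

func main() {
	obs := []float64{100, 100, 100, 100} // four collections of the same cumulative value
	fmt.Println(sumOverTime(obs))        // 400: double-counted across intervals
	fmt.Println(lastValue(obs))          // 100: what the observer actually reports

	// Summing is still the correct combination *across label sets*
	// within one interval, e.g. reducing per-CPU cumulative values:
	fmt.Println(sumOverTime([]float64{40, 60})) // 100
}
```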
G
There's a reducer processor. If you were to wire up the reducer processor, it would matter which one you did, and that's the reason why this is rational. And, by the way, this is still the biggest point of confusion in the metrics design right now, and it's going to be discussed — but not in this meeting.
C
Clear
cool
yeah
thanks
for
bringing
it
up
thanks
for
going
into
that
again,.
C
As Josh pointed out at the bottom of this: if you want to help the metrics implementation, this is the top issue to please review. I am guilty of not reviewing this yet, even though I committed to doing it last week, so I'm doubly guilty at this point.
C
I'm going to hand it over to you.

A
Okay, so I'm just wondering about the releases, because I think Anthony just merged his PR last night, and I think, in order for me to actually use that code, there has to be a release made.
C
There's a lot of stuff I'd love to get into the next release. I know Josh's PR for the accumulator fix — he's asked to make sure that we get that into the next release — so it'd be really ideal if we could get that merged. I wonder if we could also make releases —
G
Gonna
volunteer
to
go
write
a
paragraph
at
the
point
where
that
fix
happens,
to
explain
more
in
detail
what
I'm
I'm
gonna,
try
and
help.
Thank
you.
C
I think that's more than generous. But, Anthony, I think if you and I could review this PR — hopefully today — we could include it in the release, which we'd get out today or tomorrow, I think.
E
Yeah, so I'll take a look at that today, and hopefully we can do the main repo release tonight or early tomorrow. I think I will have time tonight if we can get things sorted.
C
Yeah, I as well will have time tonight to do a release, and at the latest to get a review done for this PR. But yes — tomorrow I'm technically off, but I'll probably have an hour or two to follow up with another release as well, since those are pretty non-brain-intensive. So we can get something out.
A
I was also wondering if it was possible to do a release for the contrib repo as well, because I think AWS is releasing something next Monday, and yeah.
E
Yeah, yeah — so we normally do a release of the contrib repo right after we do a release of the primary repo. But I think what we can do is, if we can get the primary repo out tonight: would you be able to make the changes to the X-Ray generator that are necessary, based on the changes to that ID generator interface, and get them in for review to us tomorrow, so that we can try to do the contrib release tomorrow after that lands? Yeah.
A
I
can
get
it
in
by
tomorrow
morning,
by
friday
morning,
yeah.
C
Okay
did
that
work
for
you
tyler
yeah
yeah.
That
sounds
reasonable
to
me:
okay,
well,
cool
yeah,
we'll
try
to
coordinate
things!
Sorry
for
the
slowdown.
There's
a
lot
going
on
right
now,
yeah.
C
Yeah, more than there needs to be, sometimes, but we'll keep going. I hope I did better this time, this week. Oh —
A
Yeah, I think he had to drop off for another meeting, but I think he was just asking for review on his PRs, because I think he knows that things do slow down a bit during the holidays.
C
Yeah, I think I just saw Anthony review these PRs, so I have high hopes that it should be a pretty easy one for me to review as well. But I don't know, Anthony — did you want to talk at all? Because I think you've taken a look at them, but we're not —
E
I
think
I
would
need
to
go
look
at
whatever
these
are.
I
don't
recognize
the
numbers
sure
I
ironically,
my
my
time
will
probably
become
more
available
as
we
enter
the
holidays,
because
I'll
be
taking
some
time
off
of
my
day
job.
Oh,
yes,
the
the
circle
ci
to
github
actions.
This
looked
pretty
straightforward
dave.
I
know
you
had
a
question
about
whether
we
should
use
a
different
image
or
not.
Is
that
something
we
need
to
do
now?
That's.
C
Cool, yeah, this actually looks really concise. I don't know why I haven't looked at that yet — well, I know why I haven't — but okay, that sounds good. Dave, if you wanted to continue on: I think it sounds like we're talking about that census shim here.
H
Yeah, yeah — just sort of next steps now that we have the binary propagation all done. And actually, this is a fine thing if we decide to punt till later. So this is now looking at metrics, and for metrics —
H
The interfaces between OpenCensus and OpenTelemetry are quite different. OpenCensus is primarily based around views, which I don't think we have anything like — at least not yet. So, at a high level, the first decision we have to make is whether or not we even want to write a bridge now. Given that — maybe — actually, I haven't been following the metrics spec, so I'm not sure if we're planning on trying to reach feature parity with OpenCensus or not; regardless, whether we even want a metric bridge right now.
H
You can convert an OpenTelemetry exporter into something that implements the OpenCensus exporter interface, such that I can take one exporter and use it to export all the metrics for both OpenCensus libraries and OpenTelemetry libraries. That's different from the tracing bridge, which, as soon as you call an OpenCensus API function, simply diverts that to call the OpenTelemetry trace APIs.
G
I also think — I mean, your question about views is certainly one where nobody says, "no, we don't want views." There's been a long-standing discussion about configurable SDKs in general, and I see views as kind of just a way to configure what happens with your metrics; it's very specific and fine-grained.
G
So
we
think
that
all
the
sort
of
mechanical
semantic
structures
that
we
need
to
do
open
census
views
are
present,
but
we
haven't
attached
the
machinery
that
we
need
to
do
that
and
I
think
there's
some
question
in
my
mind
about
how
far
we
need
to
go
to
meet
sort
of
like
the
99
of
open
census
use
cases.
G
I
just
converted
some
open
census
code
myself
recently
and
I
was
able
to
convert
it
into
very
straightforward
hotel,
metrics
usage,
because
all
the
views
that
I
saw
were
very
conventional,
and
so
I
saw
basically
to
me
the
usage
of
the
views
that
were
being
used
were
so
simple
that
I
could
just
replace
them
with
straightforward
hotel,
instrumentation,
and
I
don't,
I
didn't,
see
anything
complicated
in
other
words,
so
I
I
don't
know
what
we
should
do.
I
I
think
you've
described
a
viable
way
to
go.
G
It's
just
like
in
to
redirect
all
the
metrics
apis
into
open
census
for
now
to
get
sort
of
status
quo,
but
I
have
a
feeling
that
most
open
census
views
could
be
rewritten
in
hotel,
doesn't
mean
that
you're
they
want
to
do
that
and
you'll
have
to
answer
how
to
sort
of
keep
compatibility
as
we
move
forward
and
get
views.
But
right
now
I
think
I
think
you've
described
the
shortest
path
and
I
think
it's
a
little
unfortunate
because
most
of
the
functionality
we
need
are
is
implemented.
G
That's coming — in fact, it's under review; that was discussed last week. There was an open PR about it. The biggest question was terminology, and that's not about views; so, if it's about context, we should focus on that.
C
So
yeah,
in
your
opinion,
would
this
be
wasted
effort
if
we
built
a
shim
for
the
exporters
or
with
this
you
know
like?
Would
it
be
more
helpful
if
david
came
back
and
said
like
here's
the
features
we
need
to
get
parity
on
for
open
census
and
then
we
can
kind
of
work
towards
that
or
building
this
ship.
G
I think the typical example that I saw — and this was in the Stackdriver sidecar that I converted for Prometheus — was that the OpenCensus implementation had views for what would be a ValueRecorder, and they were basically configuring a sum and a count, and that's basically a default output for OTel. I don't think we need to configure that view anymore, so I just erased a bunch of views, and things are okay. So my goal is that the OTel defaults should behave approximately like most of the views that were getting configured in OpenCensus.
H
I'd prefer not to do that, because that makes the final step — it makes it so that no one has an incentive to convert their stuff to OpenTelemetry if there are any inconsistencies. I'd prefer to have a bridge that works 99% of the time, to convert OpenCensus to act like OpenTelemetry.
G
I
believe
that
there
should
be
a
one-to-one,
well-defined
conversion
from
open
census.
Api
calls
into
hotel
api
calls,
it's
just
a
question
of
whether
you
get
all
the
views,
functionality,
working
and-
and
maybe
it's
possible
to
imagine-
a
sort
of
intermediate
kind
of
combination
here,
where
you,
you
bridge
all
the
open
census.
Api
calls
into
the
open,
telemetry
sdk,
and
then
you
bridge
the
outputs
of
the
open,
telemetry
sdk
and
do
an
open
census.
G
— assuming you get the right outputs. So that's two bridges, essentially: a bridge at the export and a bridge at the input.
C
Yeah. So I don't know if we want to go back on that second one, though. I think what Dave is pointing out is the fact that, once you get to OpenTelemetry, it'd be ideal to not have any conversions back, because it gives the user some sort of, you know, way to go from one to the other. It's showing that OpenCensus is still a viable path forward — and, you know, ideally OpenCensus is going to be sunsetted eventually, right?
C
So
I
think
the
path
forward
is
if
we
want
to
eventually
just
use
the
open
telemetry
exporters,
especially
given
the
fact
that
open
source
exporters
there
are
more
of
them
and,
as
they
pointed
out,
some
of
the
users
want
to
use
those
because
they
have
functionality.
Open
senses
doesn't
already
so.
G
Yeah,
I
would
I
I
would
agree
yeah.
If
you
can
accept
otlp,
then
it
would
be
better
to
shim
into
open
census
and
then
take
hotel
p.
But
the
question
of
whether
you
have
your
views
done
correctly.
Otlp
can
carry
your
view.
Your
view
output,
but
but
the
processing
that
needs
to
be
done
may
be
a
question.
G
Oh
so
you're
saying,
like
the
exporter,
may
be
doing
some
processing
well
somewhere
in
open
census.
There's
machinery
that
implements
a
view
and
we
don't
necessarily
have
exactly
that
in
our
sdk.
So
if
you,
if
the
views
are
all
very
trivial
like
I
was
supposing
commonly
they
are,
then
you
know
you
probably
can
do
everything
you
want
in
the
export.
As
long
as
the
information
is
all
been
calculated
for
you,
you
can
you
can
sort
of
just
output
some
standard
stuff.
C
I think the other question, though, is just: if we wanted to forgo answering that question — say, just to buy some time — would it be helpful if David built an exporter shim a little bit further down, instead of shimming in front of the API? So then the OpenCensus people could start to use OpenTelemetry exporters and start that migration forward. Or, you know, do we —
C
We
imagine
feature
parity
or
support,
for
you
know
full
open
census
thing
for
that
first
question:
it's
going
to
be
answered
in
like
a
week
or
two,
which
I
don't
think
is
the
answer.
But,
like
you
know,
is
it
you
know
worthless
time
to
go.
Spend
on
the
exporter.
I
guess
to
me
to
me.
The
answer
is,
I
think,
it's
useful
to
spend
time
on
getting
that
exporter
shim
before
the
bridge,
but
I
think
that's
just
because
it
may
be
like
a
month
or
two
before
we
have
like.
H
Yeah, I think — I'm not even sure if there's an advantage to using a true bridge over just using an exporter, in the sense that, for tracing, we cared, because context propagation between OpenCensus and OpenTelemetry libraries breaks if you mix them, right? So we need to funnel them all through the same API calls.
H
But
for
metrics
by
and
large
it
seems
that
it
doesn't
really
matter
whether
users
are
using
one
api
or
the
other.
We
just
we'd
like
everything,
to
end
up
at
the
same
place.
If
we
can
so
that
migrating
the
libraries
and
dealing
with
the
long
tail
of
libraries
that
haven't
been
updated
is
easier
for
users.
C
Well, if that's the case, it sounds like this is a really optimal solution that you proposed here: just wrapping the exporters. Because, kind of like what Josh is also pointing out, if all of the views are going to be simple conversions to our instruments, then that's a really good path forward for the user.
C
Instead of us dealing with this extreme explosion of all the edge cases of all the views that could potentially be, you know, supported: have them start using OpenTelemetry exporters, and then, when they want to make that migration, they just start using the OpenTelemetry APIs — and it turns out, you know, 99% of these cases are handled by our instruments. Then it'd be a really easy transition for them, but it's also a progressive one. So yeah, I think I agree; I think maybe we could just forgo even the bridge situation and just make sure we hit —
C
Okay, all right, that sounds good. Okay, I think that's the end of our scheduled agenda. I don't know if anybody else has any other issues they wanted to bring up.
C
Cool, I think we've touched on everything. I am hoping to get some reviews done today; I've also got a ton more meetings to get to, but I'm going to make it happen somehow, yeah. If anybody has a time machine, I'd love one of those as well. But yeah — otherwise, thanks, everyone, for joining; we'll probably end it here. Metrics is in 12 minutes — please join; we love input and love feedback there as well, or just keep a pulse on where we're going —
C
— with that part of the specification. And next week we'll be meeting again at 3 p.m., so we'll see you then. Bye, everyone.