From YouTube: 2020-10-05 meeting
C: I put some, a few items. Yeah, there we go. First, I just want to start off with the scope of the maintainers meeting and the focus of the topics. I know we're kind of concentrating on spec-related issues in order to get to the trace spec freeze, but I also wanted to make sure we had time to talk about maintainer-specific topics. So, to keep with the focus, I've put this line up here for the purpose of the meeting, so we make sure we address the issues first and honor that purpose to begin with.

C: That being said, there was a topic that was brought up for the spec meeting tomorrow, actually from last week: a proposal to swap the times. That way we could address the spec issues at the beginning of the week, since we're concentrating on that first, with maintainers' issues being addressed on the Tuesday.
C: Swapping the times would allow us to take it on on the Monday. So, I know, Morgan, we talked about this at the spec meeting, correct? Yeah. That's the only topic that I'd like to propose, but it needs buy-in from both meetings.
B: I think we had pretty good buy-in from the spec meeting when we brought it up there. So the question is for the people on this call right now: what if we swapped this meeting, which currently takes place at nine o'clock Pacific on Mondays, with the spec meeting that takes place at eight o'clock a.m. Pacific on Tuesdays?
B: If you already attend both meetings, it probably doesn't make much of a difference to you, because if you're already committed to both, the times for both, collectively, are not changing.
C: Moving on to the next topic, then: a summary of the GA spec burn-down issues. We still have two priority-one, required-for-GA issues that don't have an associated PR, and we have one that has an associated PR; those are marked as blocking for the trace spec freeze.
C: I think that leads into my next topic, which is: what would be the expected new timeline for the trace spec freeze? That way we can get started on the implementation, or at least know what we're freezing and what we're trying to implement towards in the languages.
B: So I would say, top-down, it needs to be as soon as possible. We need to get this done so that we can start implementing, or not even start but complete, the implementation of the release candidate APIs in each of the SDKs. Bottom-up, I think it's going to be just whenever these two issues are finished.
B: I don't know if we wanted to go into them in detail today; I guess I'll probably do that tomorrow morning. But we need to get these two things done and then lock it down, and, to be honest, I think we should consider it locked modulo these two issues. I don't want to see other P1 issues cropping up at this point, short of something legitimately being completely broken that we would normally take a change in the RC for. But go ahead.
B: Yeah, okay, so I added that to the notes. So I guess the message for maintainers is: yeah, the spec is effectively locked modulo these issues. Obviously, if other things crop up that we need to fix during the RC, like we planned, we'll fix them, but you should, with the exception of these three issues, one of which is in progress, assume that everything else is locked for a release candidate.
H: Resources are in the SDK, so I think we are focusing right now on the API part. Resource is an SDK concept, so we still have a bit more freedom.
H: The big issues on baggage that John filed, we should look at those.
H: I would say we should do it this week, but I mean, I don't think we can make this... we can have a hard deadline, but only if you want; we can come up with one. But personally, I think it's probably this week.
E: When do you think, Bogdan, you may have a PR for the context issue?
B: Agreed, yeah. I'm focused on the initial freeze, so people can go build their implementations, and if any issues come out of those implementations, then we can work on those. But I want to give the maintainers confidence that the spec isn't going to take unexpected changes while they're doing that. I see, yeah, but I'm with you: in the RC, if there's some critical flaw, we need to fix that, obviously.
B: Okay, do we have anything else on that, or are we moving on to the GitHub OpenTelemetry topic?
C: Okay, this is just a quick one, just coming up with ideas to help with discoverability of the open source project, and GitHub has something called topics.
C: Like over here. So my proposal is for repos under OpenTelemetry to have the opentelemetry tag, and then the community can claim this and, in quotes, curate it, right, which means they can put a logo here, and then anything that's tagged with it, people can find, along with associated things. Tangentially: I saw this project, and I have no idea what it's got to do with OpenTelemetry.
C: Actually, it's got OpenTracing code and OpenCensus code in there, and it's got crazy popularity quite recently. But it'll help with discoverability of OpenTelemetry and related OpenTelemetry projects. So that's my proposal.
B: I don't know if anyone's been following up on that. Last I heard, about six months ago, we'd reached out to GitHub to see who owned the opentelemetry (one word) org, and we either heard nothing back or we heard it was taken and there's no way to take it back.
C: Should it be changed to have a dash? Let me see, I kind of think it's already... let me take a look and see whether that...
B: Yeah, well, let's do it without. If we can change the org name later, we'll do it, but enough of our own repo names don't put a dash in. The only place we use a dash is the org, and that's just because we have to. So doing this is consistent with all the repo names, because we don't say open-dash-telemetry-dash-java, for example.
O: If you want to be thorough, you can actually add both opentelemetry, without the dash, and open-telemetry for your labels, so that it's a little bit more discoverable for people. Yep.
C: All right, the last one is just a tip for maintainers: Hacktoberfest, going through the month of October. It's put on by DigitalOcean, or sponsored by DigitalOcean.
C
If
you
register
with
github,
you
get
a
free
t-shirt
if
you
have
four
pr's,
so
that's
an
excuse
to
before
pr
is
open
source
repositories,
and
so
that's
a
way
to
tweet
about
it.
Talk
about
on
social
media
in
order
to
give
a
free
t-shirt
towards
anybody
who's
motivated
by
free
t-shirts.
C: They've been running this for several years, and they even have resources with tips for maintainers on how to direct traffic or contributions for PRs, and pretty good guidance for people making PRs: you know, don't try to game the system with trivial whitespace-only changes, right?
C: It is an opportunity to have, you know, an experience for new contributors and hopefully get some exposure. So I just encourage maintainers to follow some of the tips, making a help-wanted label and whatnot, right, and tweet about it or promote it as best you can.
N: Yeah, I think they changed it. It was open PRs at first, but because of all of the spam and the manual effort, it now has to be either approved, merged, or labeled as hacktoberfest-accepted, not just opened, in order to count. So it's a two-step opt-in, so to speak.
F: Yeah, that's our plan on the instrumentation repo: we're going to add it, see what happens, and remove it as needed.
J: I put that one there, and it's about a small change that happened last week. Basically, we had a few intermediate agreements and then we ended up changing our minds, and the final agreement was that third-party propagation code would live, you know, in the actual vendor code bases, you know, somewhere. And this was changed, so it's allowed now in both OpenTelemetry or vendor-specific repos.
J: So I just wanted to mention this part here. In theory, the change is not that big, but I feel that changing these things relatively fast, and not making things clear, could be a problem. And I'm saying this because I also remember that a few maintainers mentioned that they didn't want to maintain code that belongs, for example, you know, to propagation that is vendor-specific, you know.
Q: I was going to say, my two cents is that I would prefer that the vendor-specific code live outside the repo. I don't think maintainers are familiar enough with the code to even effectively maintain it. If somebody comes in and changes it, or, you know, files a bug report, we're going to have to defer back to the vendor to ask, you know, "is this okay?" So, yeah.
K: Any other thoughts from maintainers? Well, my thought at least... so let's talk about the AWS and the Lightstep OpenTracing propagators. I think those are the two big ones, right? On Java we have a maintainer both from AWS and a maintainer from Lightstep, so for us it's like we've got the people already in place. Yeah, but you can't...
M: If I can raise a real quick point on the OpenTracing header propagator: reasonable people can disagree. Yes, Lightstep is the only company that uses that, but that is technically an OpenTracing thing and not a Lightstep thing.
M: It's defined in code in the OpenTracing repos. It was never formally put into the spec, but Envoy adopted it; Envoy's binary header propagation format is the OpenTracing one. There's really only a definition of it that's used by Lightstep, and that's how it got into these things. But I don't have a dog... or, I don't want to have a dog in this fight. I think there are good arguments on both sides, and I would tend to err on the side of not keeping it in the repo.
O: Yeah, I was just saying, like, if we have proprietary formats, shouldn't the onus of those proprietary things be on the companies that own them, and vice versa? I think that's the tricky part to me: if we're going to have a completely open spec, then that's one thing, but if you have proprietary formats mixed in with open things, then the proprietary stuff probably needs to be maintained by the people that own them.
Q: I would say things that are looked to as standards, or at least can be, at least fall into the gray zone of such things. Since this is a project that kind of merges OpenTracing and OpenCensus, probably anything that was OpenTracing or OpenCensus, you could make an argument for it being maintained. But I think things that are clearly on kind of the vendor-specific side of that line should probably stay vendor-specific.
H: That's a good point, but also the opposite point is made by, for example, Google or AWS, which may be different than the Dynatrace header or the Lightstep header, or you call it the OpenTracing header. The difference is, in the case of these cloud providers, in order to communicate with their solutions you need to use it. So if you run AWS, which is very common, I would say like 60 or 70 percent of the world runs on AWS...
D: Yes, I see three different categories here. I see fully open standards, like the W3C Trace Context; obviously nobody is arguing that one, and probably B3 as well. Then you have a second category of platforms, which is like AWS and GCP and things like that, which aren't necessarily open, but are, you know, sort of a shared resource, for lack of a better term, I guess. And then you have the fully proprietary things, like, you know, the Dynatrace propagator and any tracing vendors and things like that.
D: Those are the three major categories that I see. I have obviously no problem supporting the fully open ones. The platform ones, I think you could make an argument for, but I would want to keep them outside of the core repos, maybe in contrib or something like that; I see that argument both ways. And then for the fully proprietary ones, I have no interest at all in maintaining those.
O: I was going to say, with the platform things, I agree with you, Bogdan. Yes, we recognize that AWS and GCP and Azure run most of the world in the cloud, but then that makes the barrier to entry for new platforms very difficult. And also, if they change... where do you draw the line on which platforms to support? Like, should we support Alibaba's platform?
K: ...to this is that we should think about the users of OpenTelemetry, not the vendors: what's going to make it easiest for users to adopt this? Because we really want this to be adopted, right? This is an important goal of this project, to have this be adopted in as many places as possible. And if people have to go hunting somewhere else for AWS propagators or GCP propagators, that's true, it's going to be a pain for them, and it's definitely going to raise the barrier of entry for users.
M: Right, we have a registry. There's obviously the concept of wrapping OpenTelemetry and creating, you know, a package that bundles in your platform-specific propagators and things like that.
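The "wrapper package" idea mentioned here can be sketched roughly as follows. This is an illustrative Python sketch with entirely hypothetical names (`Propagator`, `CompositePropagator`, the header names), not the real OpenTelemetry API: a core distribution ships only open-standard propagators, and a vendor bundle layers its own format on top via a composite.

```python
# Illustrative sketch only: hypothetical names, not the real OpenTelemetry API.

class Propagator:
    """Writes/reads a trace id under one header name."""
    def __init__(self, header):
        self.header = header

    def inject(self, context, carrier):
        carrier[self.header] = context["trace_id"]

    def extract(self, carrier):
        if self.header in carrier:
            return {"trace_id": carrier[self.header]}
        return None


class CompositePropagator:
    """Injects with every registered propagator; first successful extract wins."""
    def __init__(self, propagators):
        self.propagators = list(propagators)

    def inject(self, context, carrier):
        for p in self.propagators:
            p.inject(context, carrier)

    def extract(self, carrier):
        for p in self.propagators:
            ctx = p.extract(carrier)
            if ctx is not None:
                return ctx
        return None


# Core ships only the open standard...
core = [Propagator("traceparent")]
# ...and a hypothetical vendor bundle appends its own header format.
propagator = CompositePropagator(core + [Propagator("x-vendor-trace")])

headers = {}
propagator.inject({"trace_id": "abc123"}, headers)
```

The point of the design is that the vendor bundle is a separate package the user installs; core never has to know about the vendor header.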
D: Yeah, I think so. And I can say, at least in JS, we've had the vendors themselves, particularly AWS, ask us to host their propagator, and we actually said no for now, and they had no problem hosting it on their own. But I have not had a single user ask me about it, only AWS themselves.
P: Yeah, that's been the most... that's why I want all these other tracing headers to die; they should be put off, they shouldn't be included in our main repos. Envoy, Contour was the biggest one that bit me recently. Very annoying. Also...
M: I think, as a general statement of personal preference, the more we can do to try to shift people towards something that is an actual, factual standard, like W3C Trace Context, is a healthy ecosystem decision.
M: With that in mind, saying, okay, the only things that get into core are actual, you know, open source, widely supported, whatever, and then we have proper ways for people to discover platform-specific ones and to discover vendor-specific ones, seems fine.
F: Yeah, that we want people to consolidate on these headers. One other question I had is the same question, basically, but for resource detectors. I think there's clearly value in having, you know, AWS resource detectors for users, since a lot of users are in AWS, but I feel like it's the same kind of discussion there, except for the difference that there's not a desired future state.
D: Yeah, so for the resources there are two things from my perspective. One, it's significantly less code; it's way easier to understand. For the most part, like in GCP, you just hit a single endpoint, they have already published modules to hit them, and you're done. But two, and I brought this up before and we don't have to talk about it now, but at least in JS (I don't know if others have had this issue) getting remote resources at process startup is a huge pain in the current state.
D: I know that particularly the folks at Lightstep have been making some noise about that, because they have a wrapped Lightstep SDK. I think they want to make the startup simpler for their users, and in the current state of the JS repo it's not that easy for them.
Q: While I agree these things are like a can of worms, my gut feeling is that this impacts other languages whether or not they know it yet, especially in ecosystems where processes might have a very short runtime, or where startup time is really important, like Lambda or something where you're paying money for how long these things run. And, yeah, I think there are definitely just a lot of complications with resources coming through to the export pipeline as they're currently specified.
Q: So, I don't know, my feeling is that there needs to be some more discussion and specification around all this. I think some of this is complicated, to some degree, by the fact that resources are not part of the API; they're completely an SDK concern.
Q: Good question. I think that there likely need to be some changes here. I don't know what those changes need to be, so I don't have any specific reasons why they need to be in the API just yet, so we shouldn't go moving them. But I fear a little bit that some of the things that we might want to do might depend on that, and we'll be kind of locked out, but...
Q: Yeah, so scrap this API comment at this point in time. I will walk things back to: resource initialization at startup is complicated, and we may need to make some improvements around there.
D: It may... you know, it's usually fast, but it could theoretically not be done yet when the first span is created, and then you have to have some way to wait for the resource to finish before exporting that span. And then the question becomes: at what point do you wait? Do you wait in the tracer? Do you wait in a span processor? Do you wait in an exporter?
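The startup race being described can be sketched in a few lines. This is a hypothetical illustration in plain asyncio (not the real SDK; `detect_resource` and `Exporter` are made-up names) of the "wait in the exporter" option:

```python
import asyncio

# Illustrative sketch only: resource detection runs asynchronously, and the
# exporter waits for it to finish before shipping the first span
# (the "wait in the exporter" option from the discussion above).

async def detect_resource():
    # Stand-in for a slow call to a cloud metadata endpoint.
    await asyncio.sleep(0.01)
    return {"cloud.provider": "example"}

class Exporter:
    def __init__(self, resource_task):
        self._resource_task = resource_task
        self.exported = []

    async def export(self, span):
        # The wait happens here, so spans created before detection
        # completes are still exported with the full resource attached.
        resource = await self._resource_task
        self.exported.append({**span, "resource": resource})

async def main():
    resource_task = asyncio.ensure_future(detect_resource())
    exporter = Exporter(resource_task)
    # A span may be created (and handed to the exporter) before
    # detection is done; export blocks until the resource is ready.
    await exporter.export({"name": "first-span"})
    return exporter.exported

spans = asyncio.run(main())
```

Waiting in the tracer or span processor would look the same, just with the `await` moved earlier in the pipeline.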
D: The simplest solution to me would be to change resources to be per exporter, rather than per tracer provider, and then the exporter would have full responsibility for waiting on async resources, if it needs to, before it exports anything. But that's a pretty big change, pretty late.
D: To be honest, that in particular would make it easier for us. I know that the original reason to have it per tracer provider was to have multiple tracer providers in a process. Again, I have not really seen that in the wild for JS; nobody has asked me about that, and it's actually pretty painful right now.
D
If
you
even
want
to
try
to
support
that
use
case,
because
we
only
have
one
global
tracer
provider
and
you
have
to
manually
manage
everything
else
to
me,
it's
would
drastically
simplify
things.
If
we
said
we
just
have
one
global
tracer
provider
and
the
resources
are
handled
by
the
exporter
and
you're
done.
H: I would not put that in the exporter; maybe it's done by its own package and the exporter consumes that package. Think about that: I mean, I don't want to make it per span exporter, per metrics exporter, per logs exporter, and so on. So it probably is going to be a small package, with also a global thing or whatever, only one thing, and all these exporters can consume from there.
D: Yeah, that would work for me. So then, if you have some sort of asynchronous or potentially slow resource collection, that would be handled by this separate package, but in the typical use case, where you're just grabbing some environment variables and, you know, it's fast, then we use the mechanism we already have, yeah, and we sort of maintain them side by side.
D: Yeah, I like that idea. The only caveat that I would have is: maintaining two different mechanisms is not necessarily that big of a maintenance burden, but it's a documentation issue and potentially confusing for users, just to explain when you use each particular mechanism.
D: So, not in the primary... do we want to put them in contrib or not? That's...
S: But Morgan, where would that be? I mean, if it's not in contrib, is there a different repo?
S: You know, there will be artifacts that are built separately, you know, by the vendor, and distributed, possibly, which is not necessarily...
H: But the only difference, often, is, you know, all the dependency problems: if those propagators come with other dependencies, and dependency resolution, especially in languages like Java, and even now in Go it has started to become a mess. And I would still want that artifact to kind of move forward with the ecosystem, not, let's say, depending on an artifact that is five years old, where everything breaks, you cannot use it, and so on.
D: Yeah, and in terms of a fragmented ecosystem, I mean, that's kind of unavoidable anyway. At least for JS, there's already a handful of developers that have started their own repositories and are maintaining their own instrumentations completely outside of our repository, and, you know...
D: So, you know, I think you're already going to have to gather some components from various different places. You know, if you're using a vendor-specific exporter, you'll have to get it from somewhere, and I think the idea that you would only get all of your stuff from a single place isn't necessarily realistic for very many use cases. And the other thing is, as Alolita pointed out before...
S: I mean, think of it... I mean, Bogdan, I understand that, and I agree with you, but again, I'm thinking of a couple of models. One is that if you considered a plug-in model of sorts, which is normal with distributions or, you know, large component projects like this, third-party plugins are usually as-is, and that's something that is out of the realm of the key maintainers in terms of taking responsibility and ensuring that, you know, the dependency trees are totally in sync with what the project expects.
S: But on the other hand, if, you know, any specific component is supported by, say, AWS or Google or Microsoft, it's their responsibility, obviously, to keep those plugins, or, you know, exporters in this case, and related components up to date. But it is on the project... it doesn't mean that the repo has to be supported, you know, fully by the maintainers themselves, the core maintainers.
H: ...person, and I care about the repo itself. But what I'm trying to say is, because they coexist, it's very hard to avoid changing things sometimes, and paying the maintenance burden. Which, again, I'm fine with; the contrib works so far, we have very responsible people and such. But there will come a point when somebody will no longer be responsible for that component, and we will have to make a tough decision to delete that component because nobody maintains it.
S: I mean, it makes sense, right? If a build breaks a couple of times and somebody is not maintaining that component, then yes, you should absolutely remove that from the build process, right? It's no longer... it's a deprecated component or an unsupported component. But it doesn't mean that, for all the rest of the 99 percent who will be responsible, you kick them out too.
H: I can imagine us having a very, very hard time, sooner rather than later, maintaining that repo and doing upgrades of dependencies and so on. If we are not that open, I can see the reverse side, which is: we are not helping the users.
S: Sure, and then also the other dependency that is, you know, worth considering is integration testing, right? Because, at the end of the day, the core components of the project cannot be independent of all the other existing, you know, exporters or receivers or any other adapters that are built for different frameworks or platforms, and the integration testing has a huge dependency on that. So how do we ensure that the core components are, you know, in sync with the end-to-end pipeline?
M: To make another point: there's a whole universe of propagators that we aren't considering, that we may never know about, because they are from teams or companies that have built in-house tracing systems and will want to create a propagator to integrate their existing tracing infrastructure, or context propagation infrastructure, with OpenTelemetry, right? Yeah. So I think the more we optimize for the use case of "hey, you might have to go look for other stuff" or "hey..."
M: "...you know, you're going to be plugging things in from outside that core repo or core contrib repo," the better. I also think we should keep this pretty isolated to just propagators; there's a lot of things outside of propagators, and I feel like this conversation is pulling in both sides of it sometimes.
T: I can share my learnings. I think in OpenCensus we had the integration test with GCP, and that was causing a lot of headache. For example, when I changed the piping code, it failed some integration test, and I have to set up my own GCP account in order to test that, and I'm not interested in doing that. And also, imagine if someone is going to support Erlang or some language that I've never heard of.
T
Do
we
expect
that
language
maintainer
will
go
and
implement
a
gcp
or
aws
or
address
like
propagator
if
they're
not
interested
and
yet
like
in
aws
or
azure
gcp
people
are
saying
based
on
the
like
telemetry
data.
We
believe
this
language
is
not
our
interest,
we're
not
going
to
support
that
normally
create
a
very
confusing
situation
where
some
languages
they
have
this
vendor
specific
propagator,
some
don't
so
I
I
would
suggest
that
we
only
support
open
standards.
H: I think we have to have a longer conversation about this. I think we can start small: say we support only the fully open standards, grab feedback from users, learn from that, and make sure that, if we open things, we open in a way that is reliable and scalable for us. That doesn't mean, by the way, Alolita... this is all about the propagators for all the things, not about the contrib repo we have in the Collector, which we are happy with, and we continue to support that.
H: But this is a bit different, as I said, because this is across all the repos, all the implementations. And maybe we can start small initially and make a case for adding all of these things based on user feedback. Somebody also pointed out that, so far, only people from AWS have asked for that header. That may be because not too many AWS users have started to use OpenTelemetry yet.
T: Is it possible we have some guidance in the spec? For example, if a particular propagator sitting in the main repo had some zero-day security fix, what should be the expectation? Do we expect the maintainer to set up the AWS account, do the integration test, fix all the zero-day defects, and do the merging and release? Or do we have different expectations?
H: Yeah, yeah. I think the missing guidance and the missing process to do this is also part of the problem. You mentioned this a month ago, Alolita, that you are interested in putting down some of these processes, if I remember.
S: Yeah, I mean, that's a good point, Bogdan. I'll be happy to at least put down the use cases we have run into, you know, specifically, as Daniel said, with the propagator or ID generator. These are key components that are needed, or even the SigV4, you know, credential authentication that we do. These are all open source components, but yet, you know, they are used for vendor-specific usage. So, again, we can certainly document what we have done or what we recommend, yeah.
H: I think it's okay... I think it's different. The ID generator is different, because there you are looking for us to provide you a way to implement this specific component, versus where that vendor-specific component should live. I think us being able to provide ways for others to hook in, or to plug in different things, is a different request than us owning that custom thing.
H: I would not revert it; I would just put, maybe... or you can revert it, but I think there are other improvements there. So maybe just add a "to be decided" in that section, or a comment that this is not finalized.
K: Mostly, I wanted to just ask maintainers if anyone has actually implemented a W3C baggage propagator with the current specification, because the current spec doesn't even mention metadata, which is in the W3C baggage, and there are other issues, which we'll talk about tomorrow in the spec meeting. But if anyone has actually implemented it, I would love to talk to them, because I don't think it's possible with the current spec. So, anyway, that will be a topic for the spec meeting tomorrow.
M: Yeah, hi, so real quick: I just want to make sure everyone's aware that before KubeCon + CloudNativeCon Virtual, coming up in November, we will be hosting the first OTel Community Day. It's a CNCF joint event.
M: I have probably bugged some of you about this, but what we're looking for are lightning talks, and then also, if anyone is a maintainer and wants to do either a presentation or a panel discussion about, you know, past, present, future, whatever, then feel free to reach out to me and let me know. We have a lot of latitude in what actually happens; other than the lightning talks, those are all done through CFP. So feel free to share this with your SIGs.
M: Ted Young; Morgan is on the program committee; Kellyanne Fitzpatrick from...
M: Well, anyway, discussion for a different time. Anyway...
B: All right, thank you. I realized, John, you had asked your question about whether anyone else has implemented the baggage spec, and I probably didn't give enough time for anyone to answer. So, has anyone on the call implemented W3C baggage? I suspect not, but I'm curious.
D: Okay, one thing I did want to mention about the baggage spec is that the metadata was particularly what you brought up, and that has been kind of a back-burner type thing on the W3C spec side. The only reason it's even included is because we used an existing header format, the Set-Cookie header, but the metadata is essentially only specified to the point where it's technically there; it doesn't mean anything in the baggage spec right now, right?
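For reference, the metadata being discussed rides after semicolons inside each comma-separated member of the W3C Baggage header. A minimal illustrative parser (a sketch following the W3C Baggage draft's list-member shape, not the OpenTelemetry implementation) might look like:

```python
from urllib.parse import unquote

# Illustrative sketch of parsing a W3C Baggage header, keeping the
# per-entry metadata (the semicolon-separated properties) that the
# discussion above says the OpenTelemetry spec doesn't yet cover.

def parse_baggage(header):
    entries = {}
    for member in header.split(","):
        parts = member.strip().split(";")
        key, _, value = parts[0].partition("=")
        entries[key.strip()] = {
            # Values are percent-encoded on the wire.
            "value": unquote(value.strip()),
            # Everything after the first ';' is opaque metadata.
            "metadata": [p.strip() for p in parts[1:]],
        }
    return entries

baggage = parse_baggage("userId=alice;opaque=1,serverNode=DF%2028")
```

The metadata list is kept as opaque strings because, as noted above, the specs assign it no meaning yet.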
S: Morgan, Morgan, I just have a last question: are we still tracking next week for the RC?
B: That's a good question. I mean, we were talking about that earlier, right?
H: You guys... the idea is we are in an RC officially, yeah.
B: I mean, first we need the SDKs to hit the RC, right? I think that's what Alolita was asking about: when the SDKs can implement. We can discuss some of this in more depth tomorrow, because I know we're over time. Consider the tracing spec and context spec locked, modulo the three issues that are open, which have been discovered and are being fixed, okay, but we're not taking any additional changes, short of, like, P0s.