From YouTube: 2020-10-28 meeting
C
I'm using a headset today, so I wouldn't expect to echo. I might be echoing a bit through someone else.
C
Interesting, okay. So the first thing I wanted to talk about was the OpenTelemetry community day. This was brought up in the maintainers meeting. Let me share my screen.
C
So it's a free event and anyone is welcome to join, so click there to get your tickets. I know they're also looking for some participants for things like workshops. If you're interested, get in contact.
C
It looks like somebody has already started, but it's just something to keep in mind that we should be watching. I'm trying to follow this work to make sure it stays up to date and accurate, but everyone's help is appreciated on stuff like this. One thing I did notice is that this person said they're going to migrate the getting started guide, but the getting started guide is a little bit outdated at this point.
C
It
just
doesn't
use
the
new
sdk
module,
so
maybe
maybe
I'll
have
time
to
update
that
we'll
see.
B
Yeah, I was just looking through contrib yesterday and I reviewed one of Bart's PRs, a bug fix for the pg plugin. I'd seen a couple of issues come in on that. So I was just going to call attention to the fact that there is a PR out there to take a look at, and there is somebody... oh, all right, it's merged. There we go.
B
Right, so the only other thing I had was that somebody was asking about the next release. I think there's a couple of things this user was interested in, so I wanted to bring that up too. I don't know if that's on everybody's radar; I'm not sure if it released with v0.12 or not yet.
C
I would like to release today, but I just opened this PR this morning that updates all of the core dependencies to v0.12 of the SDK. So I'd like to get this merged before we do the release, but I'd like to do it today if possible.
B
Okay, great. Yeah, that's all I had on that stuff.
C
Does that sound too difficult to manage? I mean, if it's automated, I don't think it should be that bad. But what would you guys think of something like that?
B
Really, the one thing I was just thinking through was, I think at one point in time the plugin versions generally matched the API/SDK versions they were compatible with. It seems like that has changed a little bit.
B
It
seems
like
sometimes
now
the
plugins
are
a
little
bit
behind
I've
noticed,
but
I
found
that
somewhat
useful
to
know
what
was
compatible
with
what
and
definitely
notice
that,
if
you
use
like
an
older
version
of
a
plug-in
with
a
newer
version
of
like
an
api
or
sdk,
you
can
get
into
some
really
bad
situations
that
are
not
immediately
obvious.
C
Yeah. When everything was in a single repo, that was much easier to do. When we moved to a contrib repo, the release processes essentially became out of sync.
C
We could try to tie the release processes together. I'm not entirely sure how it would work, but we could try to synchronize all the versions again.
C
It's my hope, Matt, that that problem will be fixed, or at least mostly mitigated, when we go to 1.0, because then the core API should not change, or at least change infrequently. So it'd be much easier to just say all the plugins support 1.x. But because we're not at 1.0 yet, we can't make a promise like that.
B
Yeah, I think that makes sense. As things stabilize, I don't see there being a ton of change on either side. Instrumentation can become pretty stable as long as the libraries don't introduce breaking changes, and the API/SDK, of course, will calm down a little bit.
B
Yeah, I didn't realize that was so complicated in Ruby. The thing we've been doing is just matching the minor versions and letting the patch vary, and that has worked out pretty well. We're able to release individual packages as needed, so if there's a hotfix for one package, you just make a patch release and everything continues to work.
C
Yeah, that's more or less what I'm suggesting. Each time you update a plugin — you know, if you add a fix — it would just bump the patch version of only that plugin and release only that plugin. Is it doable easily? Yeah, Lerna actually has a mode for allowing the versions to desynchronize. I forget exactly what it's called.
C
I think they just call it independent versioning. Essentially, there's a Lerna command to release only the packages whose versions have been bumped, so in the PR you would just bump the version number and that would cause it to release.
B
Yeah, I guess now that we've talked about this, it does sound like a good idea. It's definitely worth pursuing.
C
If it works well, we may want to consider something similar for the main repo too — bumping the patch version on every PR and releasing. I don't see any real problem with doing that, but I think we should start with contrib, where it's a little bit safer to do things like that, and see whether or not it works.
C
I guess I will this afternoon.
C
Yeah, so I think I know what the problem with that is. I'll try to fix that this afternoon also.
C
Okay, related to this — this also came up in the maintainers meeting — the CNCF wants to stop paying for CircleCI, which means we need to move our tests to GitHub Actions.
C
So if someone wants to volunteer for this, that would be great. If not, I can just create an issue and we can get to it when we get to it.
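For anyone picking this up, a minimal sketch of what the GitHub Actions side might look like for a Node project — the workflow name, Node versions, and npm scripts are assumptions, not a translation of the actual CircleCI config:

```yaml
name: Unit Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [10, 12, 14]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node }}
      - run: npm install
      - run: npm test
```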
C
All right. Also from the maintainers meeting: the other language SIGs are looking for people to test their SDKs. Actually, all the SIGs are supposed to be doing this, so somebody from another language will probably be testing ours soon. I think Morgan volunteered. The idea is to get someone who's not as familiar with it to test it and make sure it's actually as easy to use as we think.
C
If anybody wants to volunteer for this type of thing, I know other SIGs are looking, so feel free to reach out to them. If you don't have time, that's fine too, but I just figured I would mention it.
C
Okay, I'll review this after the meeting, and if it's good I'll just merge it. I wanted to bring up that there are three bugs in the main repo that all have PRs waiting on reviews.
C
So if we could get these reviewed, that would be great.
C
It's getting there, but there's been some back and forth. So, Bart, I read through your comment and I think it sounds reasonable, so I think we should have him update the PR. Other than that, the other two just need reviews.
C
In terms of our GA burndown, there are a handful of PRs open that still need additional reviews, and there are a handful of issues that are unassigned. So I was hoping to find volunteers for some of these issues today, if we can, if people have time. Most of them are pretty small, actually, so they shouldn't take a ton of time, but these are all things that are needed for GA and nobody is responsible for them yet.
C
I think the main confusion here is wording, actually — the term "context" means too many different things. If we look at your comment here: yeah, so the parent context here does not refer to the span context. It refers to the context object that contains the active span.
C
The issue here, I believe, is just wording. Take this example, for instance.
C
This is just how the specification wants us to inject spans into the startSpan operation. The reason for this is that they want things like span processors to be able to access other things in the context, like the baggage, down the line.
C
So instead of individually adding all of those things as arguments, they're just passing the full context down, and they have specifically disallowed it.
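To make the wording point concrete, here is a tiny self-contained mock of the pattern being described — this is not the real `@opentelemetry/api`, and all names in it are hypothetical; it only illustrates why `startSpan` takes a parent *context* (which carries the active span, baggage, etc.) rather than a parent span:

```javascript
// An immutable context is a bag of values; setValue returns a new copy
// rather than mutating, mirroring how context propagation works.
const createContext = (values = {}) => ({
  getValue: (key) => values[key],
  setValue: (key, value) => createContext({ ...values, [key]: value }),
});

const ROOT_CONTEXT = createContext();

// startSpan receives the parent context, not just a parent span, so
// downstream consumers (e.g. span processors) can read other entries
// like baggage from the same context object.
function startSpan(name, parentContext = ROOT_CONTEXT) {
  const parentSpan = parentContext.getValue('active-span');
  return {
    name,
    parentName: parentSpan ? parentSpan.name : undefined,
    // A span processor could inspect the baggage carried alongside:
    baggage: parentContext.getValue('baggage'),
  };
}

// Usage: one context object carries both baggage and the active span.
let ctx = ROOT_CONTEXT.setValue('baggage', { tenant: 'acme' });
const parent = startSpan('parent', ctx);
ctx = ctx.setValue('active-span', parent);
const child = startSpan('child', ctx);

console.log(child.parentName); // 'parent'
console.log(child.baggage);    // { tenant: 'acme' }
```

Passing the span directly would work for parenting, but it would cut span processors off from everything else the context carries, which is what the spec language is guarding against.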
A
When would they do that? I mean, because they can. I mean, you can still call it — it will be accessible, it will work fine — but there might be some hidden bugs behind it. So either we do this, but I don't know, with a dedicated function for it, or we do this somehow behind the scenes.
C
I was looking in the context — so the only thing that changed is the span options? Okay. So if you don't change it, this was removed.
A
It's possible, yeah. So that's my main concern: people will be using this one, and the only way to flag that it's forbidden is to just tell them, "please don't do this." Because, I mean, writing somewhere the fact that it's forbidden doesn't help when it's still possible. So.
C
We could maybe make a new method that's just called something like setParent that takes both, as a convenience.
B
Yeah, I wasn't at last week's meeting, but I was just looking at the notes, and there was something saying that gRPC was seven to eight times faster than JSON, I think for the collector exporter.
C
That was in Gitter. That was just something somebody mentioned to me. Let me see if I can find it.
C
Yep — yeah, sorry, I don't remember exactly where it came from, but they were essentially asking for a gRPC exporter in the web. This made the agenda for last week, but actually only a few people showed up and we ended up just having a short meeting.
C
So we didn't really talk about that. I'm not very familiar with gRPC in general, particularly not in the web, so I've been hoping to get Bart's input on that, actually — why we currently don't do it, and how much work it would be to support it. Because if it's easy, I think it's something we should do, but if it's a ton of work, then maybe it's not worth it right now.
B
I think there are some complications with browsers and gRPC's use of eval. I don't know — maybe Bart can explain this better — but I've seen that some browsers have security concerns with it, I guess.
B
So yeah. I put a couple of suggestions down here of things we could consider. There are really three things that gRPC does that make it a little bit faster. One of them is the actual serialization of the data: JSON encoding is pretty fast, gRPC encoding might be a little bit faster, but that's the one thing.
B
We can't change that. The two other things gRPC is doing for you: it's going to compress requests — I don't know if we're doing that for JSON, so we could gzip the spans we're sending — and the other thing it's probably doing is keeping a persistent connection.
A
Which — yeah, because, I mean, gRPC is not something natively supported by the browsers, yes? So it means you have to add third-party converters for all of that, which means adding lots of JavaScript to the web. And people are already complaining about even, you know, 50 or 100 more kilobytes of JavaScript, and I think this would be much more. So yeah.
C
I think what Matt brings up — gzipping and using HTTP keep-alive — are simple things that we can do, and that we should probably do, to try to speed up what we have. And I've been looking in Gitter, trying to find the person that made this request, to find out how they measured it.
C
We need to know why it's so much faster, because seven to eight times tells me either that the HTTP one is really slow for some reason, which I don't think is the case, or that the time difference is so small that it hardly matters. The difference between five milliseconds and 25 milliseconds is technically five times, but it's such a small difference that it may not make a difference.
C
So I think we need to determine that.
A
But the solution for gRPC today is not really gRPC, because it will still be sent as an HTTP request anyway, right? So yeah, the browser doesn't support gRPC natively, so it will all be sent using HTTP anyway. If that is the case, then yeah, maybe either gzipping this or keep-alive would give a lot of improvement, you know.
C
Yeah, I think gzipping and using keep-alive are things we can do that we know will work. If we're not doing them now, then we should. We should do whatever we can to speed up what we have.
C
If we did the gRPC one, yeah, it would be, but I don't think there's any need to do it right now. There are too many drawbacks with gRPC on the web anyway, and yeah, I just don't think it's worth it at the moment.
D
Yeah, I'm sorry for a maybe dumb question, but did you mean protobuf instead of gRPC?
C
Yeah — so this particular — I'm a little annoyed that I can't find the conversation right now — this particular user was measuring it from Node, and they were saying that using the HTTP exporter versus the gRPC exporter in Node was a speedup. So that was the comparison, and then the question was extended to the web, where we only export with HTTP.
D
No, we just discussed it within the team, and we were interested whether it brings any benefits — but more in terms of smaller span size, at the cost of CPU and bundle size, of course, if protobuf were implemented. So I was actually looking forward to hearing from the guy who mentioned it a week ago, but I guess that's that.
C
Yeah, I mean, if you want to contribute an exporter that uses protobuf in the web, you know, I won't stop you. I think that would be a good addition, and everybody can make their own decision about whether they want to trade off bundle size for span size. If you have a high-traffic application that's sending a lot of spans...
C
That might be worth it. I'm happy to create an issue for it, and then, if somebody wants to eventually implement it, great, but I don't think I want to spend my time on it right now.
C
It would be a much smaller transfer size over the wire when it's exporting the spans.
A
Yeah, but at the same time the package for the end user would be much bigger.
B
Yeah, but I think that was one of the reasons I was recommending gzipping your requests. That will definitely decrease the wire size, probably to something very similar to the protobuf size, and I think you'll get essentially the same benefits that you would from gRPC by doing both of those — gzipping your requests and using HTTP keep-alives — because then the only other variable just becomes the actual serialization cost between JSON and proto.
B
There's a library — I think it's called protobuf.js — whose readme has some fairly out-of-date encoding benchmarks that they did. Yeah.
B
So if you scroll pretty much all the way down, I think that's where you'll find them. They found that using the Google protobuf library was super slow — like seven times as slow — JSON encoding was two to three times as slow, and protobuf.js itself was the fastest.
C
Yeah, so I mean, it might be worth it to use protobuf.js for the web, because I think the protobuf.js min bundle size is actually not even that big anymore. It used to be really big, but I think it's not that bad anymore.
C
I would expect it probably in the main repo, assuming it's not wildly different from the exporters that we have. You know, if it can just be a different transport mechanism on top, inheriting from the exporters we already have, it shouldn't be that much code, I wouldn't expect. So I don't think it would be that big of a deal to have it in the main repo. But anyway.
D
No, I totally agree — at least, you know, intuitively that totally makes sense. I just hope that the guy would come back with some hard numbers.
D
Oh, it's not about the performance of the serialization itself, it's about the size of the payloads — that was the main concern.
B
Yeah, last thing: I just pasted that last link in there. It's an issue on opentelemetry-js from before my time, but it looked like proto was the only option, and there were issues with this unsafe-eval — something that's present in both protobuf.js and the Google protobuf library — and this caused security issues, I guess, with browsers.
C
I remember this issue, but Brandon just brought this up as something he ran into internally at Lightstep, and he just wanted to make us aware that it was a possible problem. It wasn't an issue with OpenTelemetry itself.
C
Yes, it is overcomeable; it's just a configuration topic. If you want to use protobuf.js in the web, you need to also set CSP headers, which we would not be able to do for users — we would have to document it. If we had a protobuf exporter: "in order to use this exporter, you must set this CSP header." Makes sense, yeah, but it is a consideration. With JSON you obviously don't have that problem — you just export and everybody's happy.
C
But I think the gzip and the keep-alive are a good place to start for now. So, Matt, you said you'll create issues for those? Yeah, awesome.
C
That said, we're approaching the end of our hour. Is there anything else that people want to bring up before we go?
C
Okay. So please review the PRs for the GA burndown and the bugs, and the contrib dependency PR, so that we can get a release out. Everybody have a good day, and I'll talk to you next week.