From YouTube: 2020-12-15 meeting
A
Cool, yeah, we'll make sure to talk about that later. If anything goes wrong, feel free to cut me off.
A
They're around kind of force flush, and where did this go?
A
It's a little weird to have the shutdown method be exactly there; that definitely belongs in the SDK. You do want some sort of shutdown method somewhere: you basically want a way to set up the SDK in the beginning and tear it down at the end, but that doesn't necessarily mean the tracer provider needs to have a shutdown method.
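A minimal sketch of the idea being discussed, with hypothetical class names rather than the real opentelemetry-ruby surface: shutdown hoisted from the span processor up to the tracer provider, which fans it out to the processors it owns.

```ruby
# Illustrative sketch only: these class names are stand-ins,
# not the actual opentelemetry-ruby SDK classes.
class SpanProcessor
  attr_reader :shutdown_called

  def initialize
    @shutdown_called = false
  end

  # Flush any buffered spans and release resources.
  def shutdown
    @shutdown_called = true
  end
end

class TracerProvider
  def initialize(processors)
    @processors = processors
  end

  # The provider owns its processors, so tear-down can live here:
  # callers shut down one object instead of reaching one level deep.
  def shutdown
    @processors.each(&:shutdown)
  end
end

processors = [SpanProcessor.new, SpanProcessor.new]
provider = TracerProvider.new(processors)
provider.shutdown
```

The design question in the discussion is only where this entry point should live, not whether it should exist.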
B
Yeah, I think originally it may have been on the processor, and the processor is nested one level deep inside the tracer provider. So I hoisted it up to the tracer provider, and yeah, you need some shutdown mechanism; I don't know where it should live.
A
Yeah, and I think that's the discussion. My guess is it'll get hoisted up one more time, but that all sounded kind of reasonable.
A
The other one was to add force flush to the API, and this was not very popular; this should really not be an API-level concern. I think the motivation behind it is probably lambda, function-as-a-service environments. So I think there are some discussions to be had to make sure things can work out there, but probably not this.
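A sketch of why FaaS environments care about force flush, using hypothetical stand-in classes: a batching processor buffers spans for a background exporter, but a frozen lambda sandbox never gives that background work a chance to run, so the handler must flush synchronously before returning.

```ruby
# Hypothetical sketch, not the real SDK: a batching processor whose
# buffered spans would be lost if the process is frozen after the
# handler returns, as happens in lambda-style environments.
class BatchProcessor
  attr_reader :buffer, :exported

  def initialize
    @buffer = []
    @exported = []
  end

  def on_finish(span)
    @buffer << span # normally exported later, on a timer
  end

  # Synchronously drain the buffer to the exporter.
  def force_flush
    @exported.concat(@buffer)
    @buffer.clear
  end
end

processor = BatchProcessor.new

# A lambda-style handler: flush before returning, since execution
# may be frozen immediately afterwards.
def handler(processor)
  processor.on_finish("span-for-this-invocation")
  { status: 200 }
ensure
  processor.force_flush
end

handler(processor)
```

This is why the discussion lands on "it needs to work somehow in those environments", without it needing to be an API-level method.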
A
Ted has now made a spec PR, which is kind of the formal version of the versioning and stability OTEP. I think that just went up last night, so he's just looking for reviews on it.
B
I haven't really been tracking this closely. I know Daniel had some concerns about the versioning and stability document. Have there been any changes based on some of the concerns that were raised, or is it basically the way it was originally specced by Ted?
A
It went through a number of changes; I don't know offhand if any of them were significant. Do you remember any of the bigger things that Daniel was concerned about? I know...
B
He wasn't terribly happy about the lockstep versioning of all the pieces.
A
Yeah, all right. I actually made a comment about the lockstep between the API and SDK, and that gained some traction; I think the OTEP was updated to allow the SDK and API to vary.
A
So I feel like, yeah, I think the big win and change here is that the SDK and API can...
B
Can vary, okay. And there's a statement that Go, Ruby, and JavaScript can have different version numbers for the experimental packages.
A
Exactly. So those are the two big things that I remember, and at least commented on for this PR, to make sure that things were in good shape, at least in terms of what we needed from Ruby for the OTEP. I think on that document we have a fairly good deal of bikeshedding we need to do to actually figure out what packages we want and how they're going to be organized, but a lot of that seemed to be, I don't know, inter-SIG.
B
Right, so we expect any changes to the spec going forward are going to have to be made in a backwards-compatible fashion.
A
Yeah, I think that is true, and I think that's kind of the intent behind all this: as users adopt it, we want to make sure that they are comfortable adopting it, knowing that there will be some long tail of support behind the thing we're pushing out. But yeah, I'd be shocked if everybody loves every aspect of 1.0 and there won't be a 2.0 at some point. Nobody came screaming that they have plans for a 2.0, though.
A
So yeah, this stuff is looking okay. I think at some point we do need to revisit our Google doc, which I think is a great start and excellent work by Daniel. At some point we need to get that into a markdown file somewhere in the repo, but I think we'll figure that out; I know other SIGs are kind of doing that, and I think they would like a standard place for it.
A
There are a couple of resource-based attributes, the cloud.* and faas.* (function-as-a-service) ones, and this PR was, I guess, just saying that you could add these to a span in a serverless environment. If...
A
Yeah, I guess I don't have any strong feelings about this. It seems like this person has some use case where the attributes he wants are not detectable at startup, or not part of the resource, and he wants to add them to the span.
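A sketch of the use case as described: per-invocation serverless attributes set on the span rather than on the resource, because they aren't known at process startup. The attribute keys follow the cloud.* / faas.* naming mentioned above, but the Span class here is a stand-in, not the real SDK.

```ruby
# Stand-in span with a settable attribute bag; illustrative only.
class Span
  attr_reader :attributes

  def initialize
    @attributes = {}
  end

  def set_attribute(key, value)
    @attributes[key] = value
  end
end

span = Span.new
# Known only once the invocation starts, so it can't live on a
# resource that is fixed at process startup:
span.set_attribute("faas.execution", "invocation-id-123")
span.set_attribute("cloud.region", "us-east-1")
```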
A
I think, ultimately, the thing driving the split in other SIGs, and really this proposal, is the amount of traffic coming into a repo versus the amount of work that the set of maintainers in a repo can reasonably get through, with multiple repos as a way to manage that throughput a little bit better, and mainly to manage integrating some of the changes and reviewing some of the PRs. I think we've all suffered from this to some degree: being able to provide feedback on all the stuff coming in. So I think the main driver behind this was moving some more stuff into a contrib repo, with the hope of maybe adding a few more people with responsibility over that repo to help get those changes through.
B
I don't have a really strong opinion. The only thing from my perspective is that we really don't have so many contributions, and so many contributors, that there's a strong justification for splitting the monorepo.
A
Nobody has told us that we need a monorepo or a split repo yet, but I think most of the other SIGs have had pains with their monorepo, and the split made sense for them. So if we reach that point, we can follow the lead of those other SIGs, and that will work for a while.
A
But apparently that only works for a while, and then you start seeing proposals such as this one to help these situations more; I'm sure there will be more proposals in this vein as time goes on.
A
The last thing: we did talk about metrics topics for a little bit. One of the key things, which jmfd was bringing up, and which I have actually run into in thinking about some of the metrics work, in regards to what needs to be done on a metrics backend, is this.
A
Generally, having some way to uniquely identify where metrics are coming from. I think a lot of languages can use a proxy for this; for Go, Java, or .NET it's common that you could probably use the host as a proxy when you're serving your application.
A
You probably have one process per host, and that's going to be pretty normal. But for Ruby this is not at all the case: you're going to have a pool of processes per host serving your application, and if all of them are reporting a VM metric, like something about GC for example, you need to have a way to figure out how that data should be aggregated and then presented meaningfully to the user. And without having a...
A
A way to at least uniquely identify a process, you're not going to be able to do that very well. So I think that was a thing that the metrics folks are starting to realize, for other reasons, but...
A
I think there's a process in which, as you're exporting metrics and going through the collector, you can basically drop labels, and in that whole process there are some labels that it turns out you probably cannot drop. One of the things that you definitely cannot drop is the one that identifies a single thing emitting metrics, because some of the backends are going to need this to figure out how to aggregate certain points as they come in. So there'll be some spec work around that at some point.
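A sketch of why the per-process identity label matters, under the assumption that two Ruby worker processes on the same host report the same gauge: with a unique instance id on each point, a backend can distinguish the processes and aggregate deliberately (here, summing per host) instead of letting one point clobber the other. The label names are illustrative, not a fixed convention.

```ruby
require "securerandom"

# Each emitting process attaches a host label plus a unique
# per-process id (illustrative label names).
def identity_labels(host)
  { "host" => host, "instance_id" => SecureRandom.uuid }
end

# Two worker processes on host "web-1" each report a GC-style gauge.
points = [
  { labels: identity_labels("web-1"), value: 5 },  # worker process 1
  { labels: identity_labels("web-1"), value: 7 },  # worker process 2
]

# Group by host only: because instance_id keeps the two series
# distinct, a backend can choose to sum across instances rather than
# having one process's value overwrite the other's.
by_host = points.group_by { |p| p[:labels]["host"] }
total = by_host["web-1"].sum { |p| p[:value] }
```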
A
Yeah, ultimately I think that's going to be a pretty big win for us in terms of efficient use of time. In general I do think the shape of the current version of the Go SDK is pretty close to what we're going to end up with, but that was definitely not the case before, and I feel like other SIGs who have metrics implementations have implementations that look a lot like the Go SDK from back then, and are kind of stuck in this world of:
A
How do we get this March-ish version of the SDK to something like what the Go version is today? It's kind of not super straightforward, so I think different SIGs are wondering if it just makes sense to start fresh again.
A
Yeah, I mean, we have that option, which is great, but even those who have, you know, invested tons of time into their SDKs are asking that same question.
B
I don't think so. Yeah, overall churn in the spec is a lot lower. I think I opened a couple of small issues related to the spec that we need to address, or maybe only one, I don't remember right now. The fallback for the service name was the main one.
B
Yeah, we need to modify this, because the Jaeger exporter right now is populating it with "unknown", but otherwise we don't have a fallback. So if the service name isn't provided, then it's just not present in the resource.
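A sketch of the fallback being proposed: fill in a placeholder for service.name at resource-creation time when none was configured, instead of leaving the attribute absent (exporters like Jaeger require one). The "unknown_service" value here is illustrative; the exact placeholder is a spec decision.

```ruby
# Illustrative resource-attribute builder, not the real SDK API:
# guarantee service.name is always present by falling back to a
# placeholder (assumed value "unknown_service").
def resource_attributes(service_name: nil)
  attrs = {}
  attrs["service.name"] = service_name || "unknown_service"
  attrs
end

with_name = resource_attributes(service_name: "checkout")
without   = resource_attributes
```

Centralizing the fallback here means individual exporters no longer need their own "unknown" defaults.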
A
Yeah, that sounds reasonable; there are so many things that depend on a service name. I don't know; I know in the JS repo it seems like every component that depends on a service name actually takes a potential service name in the constructor for that component, which...
B
Yeah, so the other thing was: we had a bunch of changes go in over the past week, I guess, and I did another release. The release process ended up being extraordinarily painful.
B
The document upload failed. Incidentally, the documentation is being uploaded to a site that no other SIG is using at the moment; it ends up at opentelemetry.github.io/ruby or something. So anyway, it's in this place that nobody seems to be using for documentation at the moment.
B
So, I don't know, there were maybe a dozen packages that failed to upload the documentation, forcing the release to be done package by package, so it was a fairly brutal process. Initially I tried to do it in parallel.
B
That didn't work, because the action requires a single package name. Then I tried running the action, you know, on a whole bunch of tabs with the different packages; they all conflicted, because there was a git push, and the git pushes were conflicting with one another. So it really turned out that you had to do it serially, so it was just a really, really painful release. I don't know why it was more painful
B
this time; it's previously been generally okay. Or maybe it's been okay-ish previously and Daniel has noticed and jumped in and fixed it for me, but anyway, yeah. It just feels like we need to make this a lot less painful and a lot clearer. I don't know if Robert has other opinions (he's probably not able to speak yet), but I feel like he also had some opinions on improving the release process.
A
It'd be nice if Daniel were here. These auto-release processes are awesome when they work flawlessly, but when they break in the middle, it can often become painful like this. So I think, you know, there's usually a learning experience from each one of these that you can go back and apply to the whole build process, to make it a little bit more robust.
A
So if there are any lessons to be learned to make this more robust, or at least a description of what went wrong, then I think if we write these things up, we can go about addressing them.
B
Yeah, sure, yeah, I'll try to do that.
B
Yeah, at the time my focus was just on fixing the broken pushes, but yeah. Previously, I successfully released patch releases for the first time: we had just four packages that were updated, and that process actually worked quite well. It was the release of all 24 packages that was a little painful. Sorry, I don't know what else I have to talk about at the moment.
B
Maybe we can take a look at pull requests for starters, yeah. There are more here than I have looked at, so yep: we have one that's being contributed by somebody.
B
Okay, Robert's just sending me notes to comment on. We have one that's been contributed by somebody who's working on a Facebook integration at Shopify; that's the Koala instrumentation.
A
Yeah, I have been fairly swamped, and I know Robert had been asking for feedback on Kafka for a while, and I made sure to get to that yesterday.
A
So apologies for the lateness of it. But, of what's here, are all of these priority?
B
There's a couple that are sort of conflicting, I guess. So we had...
A
Yeah, I think that makes...
B
So the other thing we need to do (it's on my list of things to do) is really go through the spec with a fine-tooth comb, compare the spec against what we have, and figure out if there are any gaps we need to address before we move this to, I don't know, a release candidate or something for tracing.
B
Yeah, I know there's the compliance thing, but part of the problem with the compliance thing is that there have been PRs that have gone into the spec since then, and those PRs don't necessarily go and null out all the entries here. So it's entirely possible that we have said we meet a requirement, and the requirement has changed underneath us.
A
Yeah, there's probably a big thing that we're missing.
A
I feel like there are a couple of things in the propagation area that we are missing, and that is...
B
Yeah, I'm pretty sure we don't have fields, but yes, we have a bit of work to do there. We also have a bunch of work to do on environment variables,
B
some of which we've created issues for, but there are probably more. Certainly the limits: the span count limits, like attribute count limit, event count limit, link count limit; all those I think we're missing. We're also missing some of the stuff around samplers, yeah.
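The span-limit environment variables mentioned above can be sketched like this. The OTEL_SPAN_*_COUNT_LIMIT names come from the spec's environment-variable section; the default of 128 used here is what the spec suggested around this time, but treat both as illustrative of the pattern rather than authoritative.

```ruby
# Read span limits from environment variables with spec-style names,
# falling back to an assumed default of 128 when unset.
def span_limits(env = ENV)
  {
    attribute_count_limit: Integer(env.fetch("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "128")),
    event_count_limit:     Integer(env.fetch("OTEL_SPAN_EVENT_COUNT_LIMIT", "128")),
    link_count_limit:      Integer(env.fetch("OTEL_SPAN_LINK_COUNT_LIMIT", "128")),
  }
end

defaults = span_limits({})
custom   = span_limits({ "OTEL_SPAN_EVENT_COUNT_LIMIT" => "32" })
```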
A
And on this note, I think I made an issue for this a while ago: just this configurator.
A
I feel like, you know, we introduced this (I introduced this) very close to a year ago, after seeing all the boilerplate setup code that we had to go through, and what I started off with at the time helped solve that. But things have changed and evolved quite a bit since then, and I feel like maybe the configurator has not totally stood the test of time just yet. So I think, with some improvements...
B
I seem to recall Robert had some opinions, or at least discomfort, with the way it's structured right now; since he's muted, maybe he can bring those up on this issue. That would be helpful. He did make a comment in the chat. Sorry, just jumping back to the build issues: he said that, I think, our tests depending on what's current in the repo, versus being locked to a version, creates some risky brittleness that allows us to easily make a release with broken gems.
A
Yeah, if anything that we're currently doing does enable us to release broken gems, we should find a way to fix that, because this will not be good for...
B
But yeah, okay, sorry, just while we're on that note: he mentioned the example. There was a change we made to common, right? Okay, so we had some common UTF-8 validation or trimming code, I can't remember exactly what, but there was this common code in a couple of gems, and Robert was adding a third copy of it in another instrumentation gem, so we extracted that to common.
B
But when we released that, we didn't have the dependencies set up correctly, I think. But because everything's referencing stuff locally, it looked like all the tests passed. So all the tests looked fine, and we pushed this out, but the dependencies were broken.
B
Because we have them depending on relative dependencies, right? So the Gemfile says, you know, look for api in ./api.
A
And then the thing was not actually added to the gemspec or something, for the real world? Is that the situation? Yeah, yeah, that was the problem.
B
I'm not sure how to make it work with... yeah, I don't know. We're trading off release brittleness for developer convenience, I guess.
A
I think you could possibly add a conditional in the Gemfile, for like an env var for local dev or something, or an env var for release, so that that gem statement would not execute in the release environment, I guess. It would at least catch it there and still allow you to run in local dev. And then, just to ensure that stuff will work in the real world, you could probably toggle that back and forth and just make sure that the dependencies are okay.
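The conditional being suggested might look like the Gemfile sketch below. The RELEASE environment-variable name is hypothetical: with it set, the path-based override is skipped, so bundler resolves the gemspec's declared dependencies, and a dependency missing from the gemspec fails before the release rather than after.

```ruby
# Gemfile sketch (hypothetical env var name):
#
#   source "https://rubygems.org"
#   gemspec
#   # Local-path override only outside release runs:
#   gem "opentelemetry-api", path: "../api" unless ENV["RELEASE"]
#
# The same toggle, expressed as a testable helper that reports which
# source a given environment would resolve the dependency from:
def dependency_source(env)
  env["RELEASE"] ? :gemspec : :local_path
end

dev_source     = dependency_source({})
release_source = dependency_source({ "RELEASE" => "1" })
```

Running CI once with the variable set would exercise the gemspec path and catch the broken-dependency case described above.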
B
Okay, yeah. In the past we've talked about when the next release is coming up; I think we don't need to care too much about that right now, other than the fact that we should probably clean up our milestones, because I think our milestones are all completely incorrect at this point.
B
We may want to... I think once we've created issues for all the things that we're missing for GA, or at least for a release candidate, we should update the release candidate milestone and have everything tagged against that. Basically, all the spec compliance issues should be tagged against that.
B
I think that makes sense, because at this point we're up to a beta 0.11.0 release, I think. So I think I'll just delete all these beta milestones, keep the RC, and keep the v1.0, and once we create a bunch of spec compliance issues, we'll just tag all the spec compliance issues against the RC.
A
Yeah, that sounds good. It seems like in order to get to RC there are two major chunks of work. One is all the spec compliance issues and actually implementing all of those; and then I feel like this versioning spec has set back every project an additional month or so, just due to the reorganization that needs to happen in a lot of the packages, I guess.
B
Yeah, I don't know; I think I could probably do the re-org pretty quickly, probably in a day or two, and then I don't know how long it'll take to get reviews, because it's going to be painful to review. But I don't think the reorg will be that terrible to do. We should have an issue tracking that and, again, tag it against the RC.
B
Is propagation a separate package? I can't remember; I think it is. So context, propagation, and tracing are the main ones, and then we have the umbrella package that pulls everything together.
B
True. Okay, yeah, I don't have much else to discuss. I know Robert has been moving up the leaderboard in terms of contributions, because he is heavily focused right now on getting OpenTelemetry Ruby into production at Shopify. He's picked a victim service that he is testing with OpenTelemetry instrumentation, and that's actually shaking out quite a few bugs.
A
If there's any chance of me becoming a blocker to progress, feel free to reach out to me; I don't want to be in that situation. So if there's something that desperately needs review, reach out to me; otherwise, I will do my best to monitor things coming in, get some eyes on them, and be proactive there. But cool, thanks.
B
Anything else we want to chat about? Oh, I think that's it. Okay, cool, cool; Robert shall maintain his silence, with at least a handwave. Cool.