From YouTube: 2021-11-18 meeting
C
So I started the recording to the cloud, and what should happen is that at the end an email is sent to the email address for the host. Hopefully it goes to them and not to me. It will have the link to the recording, and I think that's probably how the automation picks it up to get it to YouTube.
C
If that happens, hopefully it will all just work. Otherwise, who knows.
A
Yeah, otherwise we should try to take good meeting notes.
A
All right, well, we are definitely beyond time at this point. Thanks for finding it, everyone; we can jump in here in just a second. I'm guessing everyone who showed up figured out what's going on; it's what we were just talking about. Also, the meeting is now being recorded again, so that's pretty good. If you haven't yet, add yourself to the attendees list, and we'll jump into the agenda. It's pretty short so far.
A
Right now, if you have anything else you want to include in the agenda, be sure to add that as well. And I figured out how to share again, cool. So, big news, at least I think so; I think this is foundational news. We have Aaron joining us as a maintainer from Lightstep; there's a PR for it here. So congratulations are in order, and we'd like to celebrate it. So yeah, we'll do a little, a little golf clap.
A
There's a reaction for that. Yeah, yeah, there you go, Anthony's up on the right. Yeah, I found those ones. I don't, I don't even know how to do what I'm sharing, but yeah, we're super excited to have Aaron join us as a maintainer for the SIG. If you haven't already, please take a look at the PR here, give it your stamp of approval or your comments, and yeah, we're looking to merge this.
A
I think my goal is tomorrow morning, just to give it a 24-hour cycle, especially if we can get this recording up, so that people who may not be able to join can still be notified. But yeah, we're hoping to accelerate some of the development and get rid of the stagnation. So yeah, I'm really excited about that.
A
Cool. Aaron, is there anything you want to say, maybe, or we can also just move on.
B
I mean, it's been a pleasure working with the community so far, and I hope to continue doing it. I'm glad I found myself in a role where I can, and am able to, put effort into making this an even better product than it already is. So, awesome.
A
Yeah, super happy to have you. Cool, I think with that we can move on to the next topic. It's definitely dawning on me that being part of the OpenTelemetry calendar, the Google calendar that the community offers, is going to be really important: one, because this meeting is now a revolving meeting, so you need to find it every single week, and two, because we canceled next week's.
A
Next week's falls in line with Thanksgiving in the U.S. We're not an entirely U.S.-based development team, but it is very heavy in the U.S., and so we figured there was just going to be less activity on this one.
A
So yeah, it's been canceled for next week. I'll try to post something next week as well, before Wednesday, but I will not be here, so heads up on that. Cool. And then the only other thing that I had added to the agenda is finalizing this RFC. We talked about this last time in the meeting notes, and we did a good job, Aaron did a good job actually, documenting it, and there was a lot of communication, I think, happening there.
A
We had our concerns listed here, did a good job on this one, and defined the mechanics. I captured here what I think the sentiment was, and I kind of want to double-check and make sure everyone is on point here: the policy we're trying to put in place is to officially support the same versions of Go that the upstream Go maintainers support, that is, the current and the previous version. That's what our testing is going to be against, and that's what our official policy is going to be against.
C
Because I think when 1.18 is released, we may not want to immediately drop 1.16 support; give people time, because there will be bugs in the point-zero release. There are in every release. Wait for 1.18.1, and then we make the change. That also gives people a little bit of time to decide whether they're going to move to 1.17 or 1.18 if they're on 1.16, and get off of that. I don't expect it'll be a significant delay, but not making the jump as soon as upstream Go makes the jump might be advisable.
A
Okay, so how should I phrase that? That's not... that's the link. I...
A
Yeah, that sounds good, Aaron, as well on the timeline; I think a week open was kind of the goal anyway, so no rush, but yeah, I think this is good. I think we're there. We also found three examples of our dependencies being held up because we're not able to upgrade, since we currently still need to support 1.15. Okay, cool, so Anthony's going to put a little revision on there. If you haven't taken a look at it yet, please do so, or provide your comments as well.
A
We have it there for community involvement. We wanted to make sure this was done asynchronously, that it wasn't just decided in last week's meeting. So, cool.
A
That being said, I think that's actually it for the agenda. I know Anthony might still be writing a comment, but I'm going to open up the floor and get off my soapbox. If anybody else has something they want to talk about that maybe isn't on the agenda, let's hear it.
A
Yeah, I definitely want to say that I made sure to warn Aaron that this is a hard job, so when he starts complaining... no, yeah, we're excited to have him.
B
There's no way to... there's not a good, prescribed way of measuring, like, how often the, not the collector, but the exporters are firing and whatnot. My recommendation is like, hey, if we can settle on either logging or some kind of internal metrics system, that would help push on and alleviate those concerns. So...
C
I think we really should have internal metrics for the SDK. The concern I have about that is: how do you export them? Like, if you export them out the export pipeline from the metrics one, that means traces are going to depend on metrics, because the tracing SDK would then depend on metrics. So one, we can't do that until metrics are stable in the first place. But two, if your metrics pipeline gets broken, you're not exporting the metrics that tell you your metrics pipeline isn't working, which, I don't know.
B
Yeah, one of the things that I suggested is like, hey, if we can actually get the internal logging, the work in progress that I have, to something that we can agree to, that could probably alleviate a lot of the "what's going on" concerns.
C
The worry I have with logging is that there can be a lot of high-frequency events, and you don't want high-frequency logging. So, you know, perhaps we buffer up events and emit counts at some point, like doing metrics through logs. But that also brings its own complications: if we're not just using our metric system, we're then building a second metric system to instrument with, without bringing in our metric system.
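[As a rough illustration of the buffer-and-emit-counts idea above, here is a minimal Go sketch that assumes nothing about the actual SDK internals: the hot path only increments a counter, and a background loop periodically emits the aggregate as a single log line. All names here are hypothetical.]

```go
package counters

import (
	"log"
	"sync/atomic"
	"time"
)

// droppedSpans counts a hypothetical high-frequency event in memory
// instead of logging every occurrence.
var droppedSpans uint64

// recordDrop runs on the hot path; it only bumps a counter.
func recordDrop() { atomic.AddUint64(&droppedSpans, 1) }

// flushCounts periodically emits the aggregated count as one log line,
// i.e. "metrics through logs" as described above.
func flushCounts(interval time.Duration) {
	for range time.Tick(interval) {
		if n := atomic.SwapUint64(&droppedSpans, 0); n > 0 {
			log.Printf("dropped %d spans in the last %s", n, interval)
		}
	}
}
```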
A
Yeah, this is, you know, this is the age-old question of who watches the watcher, because once we build a second metric system, what's watching that, right? Yeah, I think that's a good point. I think the answer is to try to solve this on as many fronts as possible, without bloating the system. I like the idea of logging; it sounds like a great solution. Metrics sound like a great solution. And honestly, I can just see it getting, you know, better integration.
A
We already have the error handler; that's something that, you know, we could build into some sort of screen agent. I don't know what that would look like; maybe paging PagerDuty or something like that for users could be kind of cool. But I like this idea also for the logging because, like Anthony is pointing out, we could start with a really simple thing and then realize, well, we probably want to build some rate limiting in there, or we probably want to build dynamic verbosity levels, right? So for something that's acting poorly, you could then, you know, change the verbosity to see its detailed internal state. But it could also be the same for metrics, right; like maybe you have exemplars added in after the fact.
A
Also, you know, maybe there's a built-in Prometheus export pipeline or something. I think there are a lot of really good ideas here, and I think it can only get better than where we're at, because right now you basically have to write a profiler, and it's hard to do in situ for any situation that actually is problematic.
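[For reference, the error handler mentioned above is a real hook in go.opentelemetry.io/otel; below is a minimal sketch of plugging in a custom handler. The loggingHandler type and its behavior are illustrative, not the SDK's.]

```go
package main

import (
	"log"

	"go.opentelemetry.io/otel"
)

// loggingHandler is an illustrative ErrorHandler; a future version could add
// rate limiting or page an on-call service, as discussed above.
type loggingHandler struct{}

func (loggingHandler) Handle(err error) {
	// A real implementation might throttle or forward to an alerting system.
	log.Printf("otel error: %v", err)
}

func main() {
	// SDK internals report errors through this handler via otel.Handle.
	otel.SetErrorHandler(loggingHandler{})
}
```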
A
Rich, how does the Go agent at New Relic handle this? I know that they have internal metrics for that.
D
Yeah, you know, we call them supportability metrics, and we have more or less standards about how all the language agents report those. They just go through our normal metrics pipeline; they are prefixed with the word "supportability", and we filter them off at data ingest. So yeah. In the situation where the metrics die, which they rarely do, then we do have logs as a backup. And talking about that, I really think this is...
D
It seems like I say this every time, and I'm sorry, but it would be great to do some sort of cross-SDK, cross-language effort here, so these can be the same across SDKs. Like the internal metrics, yeah, yeah, the internal metrics: common things like how many spans were created. Span exporters should make a metric about how many spans they dropped, or how many they think they sent, that sort of stuff, so you can make little graphs and see how customers are doing.
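[A minimal sketch of the span-exporter counters described here, written against the real sdktrace.SpanExporter interface from go.opentelemetry.io/otel/sdk/trace; the countingExporter wrapper itself is hypothetical.]

```go
package supportability

import (
	"context"
	"sync/atomic"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// countingExporter wraps another exporter and tracks how many spans it
// sent or dropped, the kind of supportability counter described above.
type countingExporter struct {
	wrapped       sdktrace.SpanExporter
	sent, dropped uint64
}

// Compile-time check that the wrapper still satisfies the interface.
var _ sdktrace.SpanExporter = (*countingExporter)(nil)

func (e *countingExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
	if err := e.wrapped.ExportSpans(ctx, spans); err != nil {
		atomic.AddUint64(&e.dropped, uint64(len(spans)))
		return err
	}
	atomic.AddUint64(&e.sent, uint64(len(spans)))
	return nil
}

func (e *countingExporter) Shutdown(ctx context.Context) error {
	return e.wrapped.Shutdown(ctx)
}
```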
B
And to be fair, I want to say the scope of the comment was that they wanted to know why some of their things weren't working the way they expected. So it's not necessarily a running-in-process-all-the-time deal that needs to be solved, but something that gives us visibility at some point would help.
A
Yeah, maybe that's also a good way to frame the context, from my, what you'd call, systems administrator days: sometimes the answer is just to kill the process, right? You can take all the metrics you want in the world to try to get insight into it, but sometimes it just needs to run. And yeah, I feel like in those situations...
A
Sometimes it's just going to get killed eventually, and maybe we try to not have that be the case where you're, you know, having such severe errors in the SDK that it's causing that. But yeah.
A
I do think that there is a middle ground we should try to aim for, and I think we're so far into having no visibility whatsoever right now that anything's going to be an improvement. Although it may swing so far to the other end: if you just put in, you know, so much logging that the user's disk is full in 20 minutes, that's not great either.
C
Yeah, so I think that perhaps, then, the work that Aaron had started with introducing a logger, right, gives us verbosity levels that we can use as kind of a throttle. If developers need to know, while they're doing development, what the heck is happening within the SDK, they can crank up the verbosity level and see, you know, at a very fine-grained level what is happening, but they wouldn't run it like that in production.
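[A minimal sketch of how such verbosity levels might look, using the go-logr/logr API with its stdr backend; the sdkLogger variable and the messages are illustrative, not the actual work in progress.]

```go
package main

import (
	"log"
	"os"

	"github.com/go-logr/logr"
	"github.com/go-logr/stdr"
)

// sdkLogger stands in for the logger the work-in-progress PR would wire
// into the SDK; the name and messages are illustrative.
var sdkLogger logr.Logger = stdr.New(log.New(os.Stderr, "", log.LstdFlags))

func main() {
	// In production, keep verbosity low: only important messages appear.
	stdr.SetVerbosity(0)
	sdkLogger.Info("exporter started")

	// While debugging, crank verbosity up to see fine-grained internals.
	stdr.SetVerbosity(8)
	sdkLogger.V(8).Info("span batch queued", "spans", 512)
}
```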
C
In production, we would then eventually build supportability metrics, or internal-visibility metrics, that can go down the normal metrics pipeline, and if those break, they break, and hopefully the error logs at least catch why they broke. It seems like a good kind of middle ground, and a place that we can get started on relatively soon by landing the logger integration.
B
Yeah, I completely agree. Shameless plug: I've put up a couple of updates. I've actually integrated it with the error handling so that it is a lot more seamless of an experience, and simplified the error handler process a little bit. I know that kind of grew the scope of that PR, but it kind of makes sense for it all to happen at once. So if anybody has any time, I'd appreciate more reviews on that.
B
I started with the default error handling using the logger, but then realized that there was a lot of mechanism that could be simplified out if, instead of having a weird delegate that sometimes gets swapped out and sometimes actually fully replaced, we just had a box implementation, where you always receive the outer box that always points to the inner one, and all of the set methods just change that inner one, so that there's no access contention.
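[A minimal sketch of that boxed-delegate idea, with hypothetical names: callers always hold the stable outer Box, and Set replaces only the inner handler behind a lock.]

```go
package handler

import "sync"

// Handler mirrors the error-handler shape; the names here are hypothetical.
type Handler interface{ Handle(error) }

// Box is the outer value callers always hold; Set replaces only the inner
// handler, so references never go stale and access is synchronized.
type Box struct {
	mu    sync.RWMutex
	inner Handler
}

func (b *Box) Handle(err error) {
	b.mu.RLock()
	h := b.inner
	b.mu.RUnlock()
	if h != nil {
		h.Handle(err)
	}
}

func (b *Box) Set(h Handler) {
	b.mu.Lock()
	b.inner = h
	b.mu.Unlock()
}
```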
A
Yeah, that's cool. Honestly, last time I saw this, I was like, well, let's just turn it into a real, non-draft PR, but I could take another look at its current state as well. I was trying to finish that other POC I had out, but this is next on my list of things to review. So it's high on my list, for all of the reasons we've described.
A
Yeah, and anybody else who has already taken a look, please take another look, and if you haven't, please take a look. I've added it to the agenda docs as a community action item, because the more eyes on it the better, especially eyes that are going to be using it from the customer's perspective. Because, I think, this is the interface we're trying to build; the implementation is totally SDK-internal, and we can always kind of adjust it later on, but this is the one where it's like:
A
If this doesn't work, then no one's going to use it. So yeah, cool. With that, any other community responses or cool projects? I'm guessing we can probably end this pretty quick.
A
Okay, seeing a lot of faces that are happy to get back to doing some work. Okay, okay, everyone, yeah, let's get back; we went almost an hour, or half an hour, more than half an hour. So thanks, everyone, for joining, and we will see you all virtually, not next week, but the week after that. And yeah, have a good day.