From YouTube: 2021-12-15 meeting
A: First thing I wanted to talk about was the test-all-versions PR. There's currently an open PR in the contrib repo to run the test-all-versions script on every PR, which I think is a good idea, but one thing that was brought up on the PR by Amir is that in their repository they're running it nightly, I believe, against all packages.
A: So I wanted to talk about: is that something that we want to do? And if so, how exactly does that work? Because we need some way of alerting that the tests have failed, whether that creates an issue or sends an email or what. So Amir, do you mind sharing how yours works and how you are alerted of failures?
C: Okay, sorry, so I just want to add that it has found issues in the past, mainly in the AWS SDK. They tend to release versions every couple of days and they change their internal package code quite often, and sometimes it broke the instrumentation.
A: Yeah, so I probably would prefer to run them nightly; I don't really see a reason not to. I know that if they fail a lot it could generate some alert spam, but I would rather be spammed with alerts about failing stuff than just not know that it's not working.
A: So I think this is something that we should implement. The question that I have is: what's the best way for us to alert on those failures? Because not everybody monitors the CNCF Slack regularly.

A: If we do that, then we run the risk of it creating a lot of issues, and duplicate issues, and stuff like that. But I guess it depends on how often we expect it to fail. In your repo, how often do you see failures? Is it something that happens once a week, or once a month? Do you have any real feeling for how often you expect that to happen?
C: So for us it's very stable. Once in a while it fails for an invalid reason, like something in the test setup, normal test-failure stuff, and then we just re-run the test, see that it's green, and let it go.
C: I just want to add something else: usually instrumentations are standalone. If you made a change in the instrumentation and you test it, then you know that everything is good. But sometimes there can be dependencies on other packages: an instrumentation depends on some other package, and that package can also be updated.
C: So that's another motivation to run it nightly: both the instrumented library can change and some dependency can change. And I think that, no matter what we do, we can start with the easiest way of alerting and see how it behaves, because currently it doesn't alert at all. I don't know what's the easiest thing to implement, but maybe we can start with that and make it better after we get the results.
A: I think the easiest is to create an issue on the repo, because I don't know what we're allowed to do on the CNCF Slack, how hard it is to get integrations approved, and stuff like that. So I think doing it directly on the repo and creating an issue is probably the easiest way for us, and then I guess I would just create one issue for the failure. I wouldn't want to create one per package or anything like that.
A: Even if you create an issue every time it fails, and it fails for a couple of days and creates a couple of issues, I think it's easy to just close those as duplicates, and that way we're still constantly reminded of the failure.
C: Yeah, and it's also very convenient to have a discussion on the issue, which is publicly available, and people can comment and reference other issues or other PRs.
A: If it generates too much spam, we'll look into what we can do to maybe deduplicate the issues or something like that, but for now I think we just create an issue when it fails. I'm not sure about when we start running it the first time; there may already be some failures, so we'll have to see how much work it is to actually get everything working initially, but hopefully it's not too bad.
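The nightly run with issue-on-failure alerting being discussed could be sketched roughly as a scheduled GitHub Actions workflow like the one below. The workflow name, script name, and issue text are assumptions for illustration, not the actual contrib-repo setup.

```yaml
name: test-all-versions-nightly
on:
  schedule:
    - cron: '0 5 * * *'  # run once a night

jobs:
  tav:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # hypothetical script entry point; the real repo may wire this differently
      - run: npm run test-all-versions
      - name: Open an issue if the run failed
        if: failure()
        uses: actions/github-script@v5
        with:
          script: |
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Nightly test-all-versions run failed',
              body: `Run: https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
            });
```

Per the discussion, duplicate issues from consecutive failing nights would simply be closed by hand rather than deduplicated in the workflow.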
A: Okay, most of you already know that there's a PR for the metric streams. It's already had a few reviews and revisions, and I think it's essentially ready for final review, so I'm just asking people to take a look at it. I'd like to merge that before the end of the week, if possible. Obviously, if it's not ready, then we won't merge it, but I think it's probably getting pretty close. Next, the examples repo PR.
A: I was hoping that the author would be on the call today, but it looks like they're not. It's been open for a while; it has a few reviews. I guess without the author here there isn't much to talk about, since it's had reviews but hasn't been revised since then, so I'll just skip this for now.
A: Okay, this PR was opened by Svetlana, and in the PR we talked about the examples and where they should live. She updated the example to use the most recently released version, and it was pointed out by, I believe, Legendecas, that the current example was actually more up to date and pointed at the work-in-progress package. I think that the examples folder should show examples for released packages, and examples for work-in-progress packages should live inside the package they're demonstrating.

A: I just think that new users who come to the repository, look at the examples folder, and try to run an example that doesn't work against the released package are going to be really confused, or just assume that it's broken.
A: I think that's probably a fairly uncontroversial thing, but if anybody has any doubts or a reason that we shouldn't do it that way, now is the time to speak up. If not, then I'm just letting people know that I think this is the way we should do it in the future.
A: So Lana, I know that you have this PR open. Do you mind moving the current work-in-progress example that you're updating into the package directory, in that same PR?
B: Sure, I can get that done by the end of today.
A: So right now we have an issue open for the API to use a shared definition for attributes. Currently in the specification there's a shared definition that metrics and traces, and I believe logs, are all using, which defines attributes as a map to a string, a number, or whatever. We already released our API with the name SpanAttributes, because we already released it as 1.0.
A: We can't get rid of that name; we have to keep it. So I thought of a few options here to move forward, and I was hoping to get people's thoughts or opinions on which of these we should go with. I'm probably leaning towards option two or three here, but I'll just go through them.
A: Option number two is to just create a new alias called MetricAttributes that points to the existing SpanAttributes. This would mean the definition is shared, and the naming would be consistent: you would use SpanAttributes when you're doing tracing and MetricAttributes when you're doing metrics. And then, if the definitions ever diverge in the future, we have two separate names already in use that we could use. And the third option is to just have them be separate definitions.
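Option two as described could be sketched like this; the names follow the discussion, but the exact shape of the released API may differ.

```typescript
// The value shape shared by the signals: a primitive or an array of primitives.
type AttributeValue =
  | string
  | number
  | boolean
  | Array<string | number | boolean>;

// Already released in 1.0, so this name must stay.
interface SpanAttributes {
  [key: string]: AttributeValue | undefined;
}

// Option two: a new alias pointing at the existing definition. Because
// TypeScript types are structural, the two names stay fully interchangeable
// unless the definitions are deliberately diverged later.
type MetricAttributes = SpanAttributes;

const attrs: MetricAttributes = { 'http.method': 'GET', 'http.status_code': 200 };
const same: SpanAttributes = attrs; // assignable in both directions
```

If the definitions ever do diverge, `MetricAttributes` can be redeclared independently without renaming anything user-facing.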
A: So I don't know if anyone has any opinion here. If people don't have opinions, I'm probably leaning towards option three for now.
D: Yeah, it's not a strong opinion, but I was just thinking: ideally, if we weren't tied to the SpanAttributes naming, it would be nice to call it Attributes, and number one kind of gives you the ability to do that in the future.
A: It uses the same proto definition? Okay, I didn't realize.
A: Okay, so if the proto definition is shared, then I think we can be fairly confident that the definitions will not diverge. I guess the only possible way it could diverge is if we have something like log attributes in the future, and then, if that's different, we would have Attributes and LogAttributes, but that's probably okay. I think they'll most likely use the same definition anyway.
D: I do feel like there has been kind of a drive to make these uniform and usable across the multiple telemetry types. There was the transition from calling these labels on metrics to adopting attributes, so that you can use the same metadata on a span as you can with metrics. But I guess the only difference that I'm aware of right now is that, while they do use the same protobuf definition, I'm pretty sure, anyway...
D: I think the spec does restrict the possible values for tracing versus, I think, logs. Technically you can have nested maps, I think, with the protobuf definition of attributes, but for tracing that is disallowed at this point in time.
D: Okay, and I assume it's also disallowed for metrics, then? I think so, but in terms of the data type itself, I think, yeah.
A: A primitive type or an array of primitive types. So, at least in the common definition of this, there are no nested objects allowed.
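The constraint being described, primitives or arrays of primitives with nested maps disallowed, could be expressed as a small runtime check. This is a sketch of the rule from the discussion, not code taken from the SDK, and it skips details like array homogeneity.

```typescript
// True for the primitive value types the common attribute definition permits.
function isPrimitive(v: unknown): v is string | number | boolean {
  return ['string', 'number', 'boolean'].includes(typeof v);
}

// An attribute value is a primitive or an array of primitives;
// nested maps/objects are rejected.
function isValidAttributeValue(v: unknown): boolean {
  if (isPrimitive(v)) return true;
  if (Array.isArray(v)) return v.every(isPrimitive);
  return false;
}

console.log(isValidAttributeValue('GET'));          // true
console.log(isValidAttributeValue([1, 2, 3]));      // true
console.log(isValidAttributeValue({ nested: 1 }));  // false
```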
A: It seems to me like the spec and the TC are pushing to have this defined in a single place as much as possible and to have a shared definition. So in that case I think option one is probably the way to go, since that's, I guess, mimicking the intention of the specification as closely as we can.
A: Let's see, thumbs up from Nev and Amir, and I know Matt already voted for that initially. So it sounds like there's support for that. If anyone has a reason we shouldn't, now is the time to speak up. Otherwise I'm going to say: let's do that.
C: I have a question: if in the future we release version two of the API, we'll probably want to get rid of SpanAttributes and stay with only Attributes, right? So maybe we should keep a list of changes that we want to implement when we bump the major version, so we don't forget to do it. Yeah.
A: Let's, I guess, create an issue, so...
E: A JSDoc @deprecated annotation, or something like that, as well, to mark them in code and help people upgrade.
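The annotation being suggested looks like this in practice; the type names here are illustrative, not the actual API surface.

```typescript
// The value shape used by attributes (simplified for illustration).
type AttributeValue = string | number | boolean;

// The shared, signal-agnostic name.
interface Attributes {
  [key: string]: AttributeValue | undefined;
}

/**
 * @deprecated Use {@link Attributes} instead.
 *
 * Editors and the TypeScript language service surface this tag (typically as
 * strikethrough plus a hint), nudging users toward the new name before the
 * alias is removed in the next major version.
 */
type SpanAttributes = Attributes;

// Existing code keeps compiling, but now sees the deprecation hint.
const attrs: SpanAttributes = { 'http.method': 'GET' };
```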
A: Yeah, we probably could do that. That's not a bad idea.
E: Yeah, I just have one question on option one: TypeScript should understand that, right? It's not going to nominally type them; as long as they're the same type definition, it won't care between one and the other. So if you have a different API version versus what the user requested, right?
A: Yeah, I forget the exact technical word for it, but the short answer is yes. We had this problem with classes, because with concrete classes TypeScript will say they're not compatible with each other, but with types and type aliases they are.
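The distinction being described is structural versus nominal typing: TypeScript compares type aliases and interfaces by shape, while classes with private members behave nominally, which is what caused the cross-version problem. A minimal illustration, with hypothetical names:

```typescript
// Two independently declared aliases with the same shape.
type AttributesV1 = { [key: string]: string | number | boolean | undefined };
type AttributesV2 = { [key: string]: string | number | boolean | undefined };

const v1: AttributesV1 = { 'http.method': 'GET' };
const v2: AttributesV2 = v1; // OK: structurally identical, so interchangeable

// Classes with private members are the exception: each class's private field
// is treated as unique to that declaration, so two otherwise identical
// classes (e.g. from two copies of the API package) are NOT assignable.
class TracerA { private state = 0; }
class TracerB { private state = 0; }
// const t: TracerA = new TracerB(); // compile error if uncommented

console.log(v2['http.method']); // 'GET'
```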
A: Okay, so I will create an issue for this and mark it as up for grabs. If anybody has time to work on it, great; if not, then I'll get to it when I get to it.
A: Matt will not be here. I think I'll probably do the same thing I did at Thanksgiving: I'll at least join the meeting. I may have a short agenda, but if anybody has anything to talk about, they can join in and talk about it. But maybe just assume that the meeting might be short next week, and we won't talk about anything too important while everybody's gone.
A: Okay, with that in mind, I'll give everybody half an hour back, and for those who aren't here next week, enjoy your holiday.