From YouTube: 2021-10-20 meeting
B: That's just some travel. I was at KubeCon last week. I don't know who else was there? I know at least Yaniv was there, and then the week before that I was in Miami.
B: All right. If you guys could put your name on the attendee list, I would appreciate that. The first item, just so you know: either next week or the week after, I think the meeting link will change, so the new link will be in the document. The reason it's changing is that currently we only have three Zoom links that we use for all meetings, so the JS SIG shares a link with the Python SIG and so on.
B: So we can't have any meetings that overlap in time, and when they get automatically uploaded to YouTube, there's no way to know which meeting it was. Moving forward, we will have a link per SIG, so that when the meetings get uploaded to YouTube they will be properly titled and so on. So just be aware, the link might change.
B: What I wanted to talk about for most of today was metrics: what work we still have to do and what our roadmap looks like. Currently, my goal is to have our API specification-compliant by December and then to have the SDK implementation done in early 2022.
B: Okay, so I created a project which tracks the metrics API work, and there is a tracking issue, which I commandeered from Ivan, which also lists all the various tasks that need to be done for the API, and I created issues for all of these things. All of them are currently up for grabs.
B: These three up here mostly already have people assigned who are already working on them, but for everything below that, in the meter provider and the instruments, I essentially created an issue for each instrument to ensure that we have it, that the name is correct, that there is a no-op implementation and an interface in the API, and things like that. So this is API only, not SDK, and I would like us to split these tasks up.
B: So if anybody wants to volunteer for any of these now, I can assign them now, or you can comment on the issue and I will assign them. Most of these, or some of them (I know the counter is probably already done, for instance) will just be ensuring that we match the specification as closely as possible, and then some of them, like the async gauge or the histogram, I think, may require some changes, as long as that makes sense to everybody.
B: I may have also missed some things in here, so if there are any functions or methods or instruments or anything like that that I missed, feel free to comment on this issue and I will add them to the list, or just create an issue and I will add that issue to the list and the project. Does that make sense to everybody?
B: I don't know, Bart probably is, but this has been removed; I think that we should remove it. There is some wording in the specification that allows for performance optimizations if we need them, but I don't know if we do. I think that was more for languages like Go and Java, which are strongly typed; because in JS objects are a little bit more common, I don't know if we really need those types of performance enhancements. But if we decide that we do, we can always add it back later.
B: Yeah, it removed it, but it didn't entirely. So if we look at the specification, I believe it's in the SDK specification, there is some wording that essentially says that you can...
F: Yeah, I think that makes sense. I know I have seen that in the API spec, I think, but it probably mostly applies to languages where you'd want to hold a lock on the instrument with those labels, whereas I don't even know if we need locks in JavaScript.
B: The two main reasons to have it would be locking, and also: if I have a counter and I call add with two different sets of labels, then there are two values stored on the counter, right? One for each set of labels.
B: So the count for cpu one is different from the count for cpu two, for instance. And one thing that the bind implementation does is essentially pre-calculate a hash of the labels, so that the order you pass them in doesn't matter. If I pass in an object that has cpu 1, host 1, and then I pass in an object that says host 1, cpu 1, so they're in the opposite order, those need to be treated as the same.
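The order-independence described here can be sketched roughly as follows. This is an illustration only, not the SDK's actual implementation, and all names (`labelSetKey`, `SketchCounter`) are made up:

```typescript
// Sketch: derive one canonical key per label set by sorting the label names
// first, so the order the caller passes them in doesn't matter.
function labelSetKey(labels: Record<string, string>): string {
  return Object.keys(labels)
    .sort()
    .map((name) => `${name}:${labels[name]}`)
    .join(";");
}

// A counter can then keep one stored value per distinct label set:
class SketchCounter {
  private values = new Map<string, number>();

  add(delta: number, labels: Record<string, string>): void {
    const key = labelSetKey(labels);
    this.values.set(key, (this.values.get(key) ?? 0) + delta);
  }

  value(labels: Record<string, string>): number {
    return this.values.get(labelSetKey(labels)) ?? 0;
  }
}
```

With this, `add(1, { cpu: "1", host: "1" })` and `add(1, { host: "1", cpu: "1" })` land on the same stored value, which is the behavior the bind implementation's pre-calculated hash is meant to guarantee.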
F: Yeah, that makes sense. I think the spec also allows you to use pre-allocated label sets. So if you wanted to use the same object across multiple invocations, that could work too, and maybe you could do some clever hashing of that and get the same performance benefit without having to do the bind.
B: Is this a known object? If it is, then great, we're done; and we do a calculation for equivalence only if we don't find it in the map, or something like that. There are some tricks we can do to maybe avoid binding, and then we can add bind if we find that we have performance issues or something like that. But to me it seems like it may be premature to add it right now.
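The known-object fast path suggested here could look roughly like the following; a `WeakMap` keys on object identity, so repeated calls with the same labels object skip the equivalence computation entirely. Everything below is a hypothetical sketch, not actual SDK code:

```typescript
// Cache keyed by object identity: if the caller reuses the same labels
// object, we find its key without sorting or string building.
const keyCache = new WeakMap<object, string>();

// Fallback: compute a canonical, order-independent key (the expensive path).
function canonicalKey(labels: Record<string, string>): string {
  return Object.keys(labels)
    .sort()
    .map((name) => `${name}:${labels[name]}`)
    .join(";");
}

function labelKey(labels: Record<string, string>): string {
  const cached = keyCache.get(labels);
  if (cached !== undefined) {
    return cached; // known object: done, no equivalence calculation needed
  }
  const key = canonicalKey(labels);
  keyCache.set(labels, key);
  return key;
}
```

Callers who pre-allocate a label set and reuse it get roughly the benefit of bind without a new API surface; distinct-but-equivalent objects still resolve to the same key through the fallback.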
F: Yeah. One other thing I just want to call out: I've been working on the implementation for Python, for metrics, prototyping, and then, like you said, Java and .NET have been doing it as well. The goal is to have a few people implement the spec before marking it stable, so we can find any weird issues.
F: I think Java... sorry, I think JavaScript is probably one of the more interesting languages, since it's single-threaded and you don't have integers, stuff like that. So whoever's working on it, it would be awesome if you could join the metrics SIG if you find anything weird, or if there are any weird things in the spec that don't work well in JavaScript. That's sort of the goal, I think, of having a feature-freeze period.
B: Yeah, definitely. There is a person from Dynatrace whom some of you might know. His name is Georg; he goes by pirgeo (p-i-r-g-e-o) on GitHub. He had been intending to join this call today, but he's actually sick this week, so he will start joining.
B: Okay, I'll put this in here; he said he may not make it, and it looks like he is not on the call. But essentially his question: he's working on splitting the exporter so that we can have the stable trace exporter go to 1.0, and the question is essentially, do we want to call it the OTLP trace exporter and have that go to 1.0?
B: I don't have a particularly strong opinion on this. I don't know whether combining them both into one exporter has any drawbacks that I'm not aware of. I can imagine that for the web browser, trying to ship as small a package as possible is certainly a concern, but at the same time, having them be only one exporter may make it easier to share some of the logic in terms of serialization and things like that. So, does anyone have an opinion on this?
B: Yeah, so we would probably have a separate metrics exporter, and then once it becomes stable we would add the metrics to the combined exporter and then deprecate this metrics package.
F: Yeah, I mean, if they're already separate, it kind of makes sense to maybe leave them separate if it's not too much overhead. What is not already separate? I mean, in the case that you mentioned, where you would have a separate package for development.
B: Yeah, I think one option we could do is have them be separate packages. We have the OTLP trace exporter and the OTLP metrics exporter, and then the OTLP logs exporter or whatever it is; have those all be separate packages, and then we could have one package, that's just the OTLP exporter, that depends on those and essentially just re-exports them. So you can depend on a single package if you want to. That might be a good middle ground.
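That middle ground can be sketched as below. The per-signal packages would stand alone, and the umbrella package would contain nothing but re-exports (in a real package, `export { ... } from "..."` statements); the classes and names here are stand-ins, not the actual published packages:

```typescript
// Stand-in for a per-signal package, e.g. an OTLP trace exporter.
class OTLPTraceExporterSketch {
  readonly signal = "trace";
}

// Stand-in for a per-signal OTLP metrics exporter package.
class OTLPMetricsExporterSketch {
  readonly signal = "metrics";
}

// Stand-in for the umbrella package: no logic of its own, it only
// re-exports the per-signal packages. Users who want one dependency take
// this; browser bundles can depend on a single signal instead.
const otlpExporterUmbrella = {
  OTLPTraceExporter: OTLPTraceExporterSketch,
  OTLPMetricsExporter: OTLPMetricsExporterSketch,
};
```

The design point is that the umbrella package only ever bumps versions; all real code and all stability guarantees live in the per-signal packages.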
I know that I've seen Nev on the call.
D: gRPC and proto will now have four instead, and because there is some code that is being shared between the exporters for metrics and for traces, updating any proto will require updating both.
B: So I'm not that familiar with the way the OTLP exporter works; I haven't worked on it that much. I do know that it uses the submodule for the proto. Is there any way to consolidate those? Maybe we could have a package just for the proto that is a dependency of each of the exporters, or does that not work? Yeah, that's what we did.
B: Yeah, but in terms of having it in a separate package, do you think that's a good idea, or not?
B: Okay, yeah, I mean, that also works. Just having the same submodule four times doesn't make sense to me, so being able to consolidate it into a single location would, I think, just make development easier if nothing else.
B: Yeah, it's only the submodule, like the proto interfaces.
D: I'm not sure, because William already started splitting it, and it should be done at that moment. I was really thinking about restructuring this too. So not splitting yet, but trying to do something a bit different about this one, because I would like to have a package that only transforms the data, nothing more, and then packages that basically send this transformed data in different ways, either using JSON or HTTP or gRPC.
D: ...else, but we are using the JS proto, so it's basically reading the proto files directly. So we don't need to worry about this one.
E: I wasn't able to make it work, but that's just because it took a lot of time. It's just another direction that, if we want to, we can take a look at as well.
D: I mean, you can give it a try. I've been there; I was trying this. I was trying with [unclear], I was working with TypeScript, and then the last package, moreover, is faster than google-protobuf and it can just read the proto files directly, so we don't have any problems. You know, it's like delegating the complete responsibility out to somewhere else. Yeah.
E: Yeah, but currently the JSON interface is written by hand, right? It's not derived directly.
E: Yeah, also the encoders and the decoders. So I think that it's okay to write it by hand, but I think we should at least try to automate it somehow, because currently, from what I have seen, there are mistakes, like incorrect types and missing data. At least it was like this a few months ago; maybe it was fixed in the meantime, but it's just a suggestion.
B: Yeah, I'll take this task, so let me create an issue here.
A: But I think William's question was a lot simpler. It was whether we should start off right now with the intention of having one master package for both of the exporters, trace and metric, or design it from now on so that they will be separate. And I think we should design for the second case, which is: let's just focus on having them separate right now. If we need to do some code duplication in the back end to make it work, that's fine.
A: We can later separate the dependent code into a separate package, or create another package that combines those two in front of it. But anyway, just making trace and metrics separate for now is the most flexible way to go, I think. And if I understand William correctly, that's the question right now.
B: Yeah, and his task is to split that. The question is: do we call the trace exporter the trace exporter, or just the exporter? Because in the future, when we have metrics, will it be added to the same one? I think the answer is, for now we call it the trace exporter, and then, if we want, we could create one combined package which depends on the trace exporter and the metrics exporter.
F: One small thing I want to call out, which is slightly unrelated, but I think there's an issue in the spec that we should support the proto3 feature `optional` for field presence. So whatever generator and protobuf library we use needs to be able to support that, which is a relatively recent feature in proto3, as I understand it.
F: No worries. As I said, it looks like we're trying to choose, potentially, between a few different ways to generate the protobufs. Whatever library we use needs to support field presence, which I believe we're going to start using in OTLP, and which is a relatively new proto3 feature.
F: ...an issue for that and update the doc here.
F: I believe it will be used in the future. There's a proposal to use it, which I believe is going to go through, and it basically lets you distinguish between the default value, so for an integer field that's zero, and the field not being included at all.
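In TypeScript terms, the distinction field presence gives you might look like this. The message and field names below are made up for illustration; they are not the real OTLP schema:

```typescript
// With proto3 explicit presence (`optional`), a generated type can model a
// field as possibly-absent, so an explicit 0 and "never set" stay distinct.
interface NumberDataPointSketch {
  asInt?: number; // e.g. `optional int64 as_int` in the .proto
}

function describeValue(point: NumberDataPointSketch): string {
  // Without field presence, a missing value also decodes to 0, and this
  // branch could not be taken reliably.
  return point.asInt === undefined ? "unset" : `set to ${point.asInt}`;
}
```

So whatever protobuf generator is chosen has to be able to emit the presence-aware (`?`-marked or equivalent) shape, not just plain defaulted fields.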
B: Okay, so we did respond to Will. I believe the metrics tracking issue and project are fairly well explained. Please take any tasks that you believe you have time for.
B: That's all we had on the agenda today. Is there anything else that we should talk about? We still have 20 minutes.
E: Is release-please working? What's the schedule for it? Because I believe that the core packages are 1.0, but all the instrumentations are still referencing version 0.25.
B: Yeah, so there was an issue: Florina updated all of the dependencies to...
B: A new PR, which uses the feat conventional commit, looks like it's been approved by a bunch of people, so I will update this; I'll merge it when the tests finish running, and then that should update the automation so that all of these packages will be released. If that works, I'll go ahead and release that today. Thanks. Yeah, I was trying to do it; I was looking into it last week, and because I was at KubeCon I didn't really have a lot of time, but I will get this done today.
B: Okay, Nev's discussion item: not using bleeding-edge features, specifically TS features that generate different code for older versions. How do we go about catching and enforcing these issues?
G: More recently, we had an issue where we had a default parameter, I think it was, or an override in the constructor, that effectively generated different code from 4.3.4 onwards versus earlier. So it's just things like that.
B: Yeah, there have been others. There have definitely been times in the past where we decided not to do things because they wouldn't be supported in Node 8 or in old browsers, for instance. We do have, let's see, opentelemetry-js...
B: Experimental... here we go. So we have these backwards-compatibility tests, which essentially don't do much, except that the node types package is pinned, like the types from Node 8 and from Node 10 and from Node 12, for instance, so that if these fail to compile, that means we've used some API that did not exist at that time. This is the bare minimum that we can do, right, just to make sure that we're not doing anything completely broken in terms of things like the override directive generating different code on different versions of TypeScript.
B
I
I
understand
the
concern
that
if
there
are
two
different
you
know
the
code's
different
in
two
different
places:
there's
potential
for
like
a
bug
on
some
version
of
typescript,
but
not
another
version
of
typescript
or
something
like
that,
but
they
should
be
functionally
identical
unless
there's
a
bug
within
typescript
itself.
G: In that particular case, it actually changed the signature of the constructor from one argument to two, so that one was pretty bad. I have no idea how we're going to catch this; that's really where the question is coming from. We could do it in code review if someone is aware of all the new features and keeps up to date on everything that's rolling out, but that's not going to catch everything.
B: It is sneaky. So I think the way to truly solve this would be to have an example project that uses as many features as possible, ideally all features, and then that project would be compiled and tested with each version of Node and each version of TypeScript. So you would say: starting from some minimum version of TypeScript, we compile it with every minor version, and then, for every one of those, starting from version 8 of Node, we run it on every version.
B: You know, every major version of Node or something like that. That's obviously a lot. I know that internally at Dynatrace we do stuff like that, but it's not without drawbacks. It takes a lot of maintenance effort, and our full test suite can take quite a while when we're running against all of those different versions.
B: Essentially, you know, "it's hard" is not really a good excuse not to do it. We certainly have higher priorities right now, I would say, like metrics and stuff like that. I only have so much time in my day, and I need to make sure that metrics is complete, but if someone were willing to take that on, I would love that.
B: ...forever. I could see, you know, maybe we change our model slightly, such that we make all of our PRs onto the main branch, and then once a week we create a PR from main to release, or something like that, and run those tests only on the big weekly release PRs. Maybe that's a good trade-off. But eventually we're going to need tests like this, and we're going to have to figure out how to run them at appropriate times.
B: You know, it's not super helpful just to say "I know this is a problem, but I don't know what to do about it right now," but that is kind of my answer: this is not something that I have time to solve at the moment. If somebody does have time to solve it, then great, I would deeply appreciate that.
G: Okay, so for now, I think we just say any approver needs to look out for this type of thing if they're aware of it, and that might be a good start.
B: Yeah, I mean, definitely look out for those types of issues if you're aware of them. When you're creating PRs, try not to use features that, you know... if you have just heard "oh, this new feature was released in TypeScript 4.9"...
B: ...you know, try not to use it just because it's new and fun. Unless there's something that actually cannot be solved without a new feature, I think we should avoid them; I think that's a good rule of thumb. And as a reviewer, you should be looking out for things like that. But in terms of an automated solution, I just don't know if we're going to have the bandwidth to create those tests for a while.
B: Yeah, if you absolutely have to, then, you know, there's no way around it. So far we've been mostly able to avoid things like that, but yeah, I definitely agree this is a concern. I think in more strict languages like Java it's a little bit tougher for them, because they have more, like, you know...
B
They
have
to
drop
hard
versions
where
in
in
typescript,
we
tend
to
have
like
more
polyfills
and
things
like
that
that
make
things
work
on
older
versions,
even
if
they
are
a
little
bit
less
performant
or
something
like
that.
But
no
I
I
agree
that
this
is
something
we
need
to
look
out
for.
B: Okay, if that's the case, then I hope everybody has a good afternoon, evening, or morning, whatever time it is for you, and I will talk to you next week. Bye-bye.