From YouTube: 2020-05-21 meeting
E
They invited Saxon to come to this meeting. So he can't come this week, but maybe next week. All right, cool. Sorry, so I was just saying: I've ID'd a former coworker of John's, a co-worker of mine. One of the New Relic founders is getting interested, so: sweet.
B
Before we get too deep into the scheduled things here: I don't want to spend a lot of time talking about static instrumentation again, because we talked about it last time, but I am working on a write-up of the static instrumentation workflow and some of the design issues that come out of it, and I was just wondering how you want that to be approached here. It's not an issue exactly, but if you want, I can file it as an issue. There should be a place for discussion about it.
D
So, let's start with manual instrumentation. This is something that various folks on this call have chatted about over the months; you can see the issue we first opened in December. The questions are: how will manual instrumentation work? Can we share most of the parts of manual and auto instrumentation? How much of those can we share? Can we share tests?
D
Well, Tyler put together an initial POC for how we could extract the decorators out into something common, but we hadn't had any progress on the individual instrumentations until yesterday, when Anuraag from the AWS team took the AWS SDK instrumentation and did some work to extract it out into manual instrumentation. I looked it over; our AWS instrumentation is fairly different from our other instrumentation, so it's maybe not the best example of how we would want this to work.
D
Do we want to keep manual and auto instrumentation in the same repo? I'm kind of in favor of that, just because I think it will make it a lot easier to share the code and tests and keep those in sync. And if we do want to do that, should we rename the repo from auto-instrumentation to something else?
B
I can actually come at this from a different direction. If the goal is that, no matter how you've instrumented something (for example, a JDBC driver), whether you've auto-instrumented it, manually instrumented it, statically instrumented it, or used a contrib package that does stuff to it, all of those scenarios use the same attribute names, semantics, and so forth, then essentially we're talking here about just focusing on the decorators.
F
In other languages we're moving this into a contrib repo, because the way it tends to work there is that those are plugins: usually a wrapper, or, if you're plugging into a framework that's got the proper hooks, you're using those. They don't have agents in other languages, but the auto-installer tries to wire all of that up for you using dynamic magic. I don't know; I think most of the Java instrumentation is similar.
A
The API has had some work on that, like semantic conventions in the API: server span, client span, or whatnot. At the same time, in auto-instrumentation we already have some proof of concept of those "typed spans" (which is maybe a bad name, but anyway). We know what we want for HTTP clients, so we already have an HTTP client span, for example. If I'm not mistaken, Tyler did that in auto-instrumentation a while back, in that pull request.
A
Yeah, exactly: it all seems somehow connected, especially taking into account John's remark about whether that should live in the SDK at all. So probably that HTTP client span should encapsulate the whole semantic convention, encapsulate everything that we want to extract: essentially, our decorator and the stuff around it. So.
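The typed-span idea being discussed could look roughly like this; a minimal sketch, where the class name `HttpClientSpan` and the Map-based attribute model are illustrative assumptions, not the actual OpenTelemetry API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical "typed span" helper: it owns the HTTP client
// semantic-convention attribute names, so manual and auto
// instrumentation both set "http.method", "http.url", and
// "http.status_code" in exactly one place.
class HttpClientSpan {
    private final Map<String, Object> attributes = new LinkedHashMap<>();

    // Start a client span for an outgoing HTTP call; callers never
    // spell the attribute keys themselves.
    static HttpClientSpan start(String method, String url) {
        HttpClientSpan span = new HttpClientSpan();
        span.attributes.put("http.method", method);
        span.attributes.put("http.url", url);
        return span;
    }

    // Record the response status; again, the key is owned here.
    HttpClientSpan end(int statusCode) {
        attributes.put("http.status_code", statusCode);
        return this;
    }

    Map<String, Object> attributes() {
        return attributes;
    }
}
```

The point of encapsulating the convention this way is that instrumentation code never writes the attribute names itself, so the manual and automatic paths cannot drift apart.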
G
There are probably some of those decorators, or base tracers, or whatever we call them (typed spans, though we obviously want a different name) that could live in the core API, but most of them, I believe, could not, because I would assume that they would actually compile against and depend on classes from the actual libraries.
G
Very likely, like we said before, there's going to be some instrumentation that is only auto instrumentation. But I don't think that the auto instrumentation should directly depend on the manual instrumentation. I think there should maybe be a third dependency, or whatnot, that both the auto and the library instrumentation depend on, that has those core classes that are shared between the two of them; but when you go to publish them, those classes can get shaded into each of those projects.
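The "third dependency" layout described above can be sketched like this; the class names are hypothetical, and the Map stands in for real span attributes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical shared-core module: the decorator logic exists once,
// and at publish time each consumer would shade this class into its
// own artifact, so auto and library instrumentation never depend on
// each other directly.
class DatabaseClientDecorator {
    // Shared attribute-naming logic lives in exactly one place.
    static Map<String, Object> onConnection(String dbSystem, String user) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("db.system", dbSystem);
        attrs.put("db.user", user);
        return attrs;
    }
}

// The library (manual) instrumentation delegates to the shared core...
class LibraryInstrumentation {
    static Map<String, Object> decorate(String user) {
        return DatabaseClientDecorator.onConnection("postgresql", user);
    }
}

// ...and the auto-instrumentation advice delegates to the same core,
// so the two paths produce identical attributes by construction.
class AutoInstrumentationAdvice {
    static Map<String, Object> onEnter(String user) {
        return DatabaseClientDecorator.onConnection("postgresql", user);
    }
}
```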
F
That's just the core mechanisms for installing instrumentation into the program, and then they've got a contrib repo that all the instrumentation and plugins live in. A human can install those, or they can use the auto-installer package that will grab any of them that you have available. How that works specifically depends on the language, but the layout there is: the core repo has the facilities for doing all of the instrumentation, like hooking it all together, but then there's a contrib package that has the actual instrumentation packages in it.
F
Whatever; the point is, you could call this "python-plugins", so then you have "python-core" and "python-plugins". I think they were calling them contrib. Some of the stuff that ends up in there, like exporters and things, does tend to be written somewhere else and then get donated in. But you're right: the point of the repos that we're hosting is we're saying this is the stuff we are going to maintain. This isn't some contrib repo we don't care about, where someone else is maintaining it and we're not paying attention to it.
F
Those things definitely don't live in OpenTelemetry. They live in some other organization, and people can add them to the registry on our website so that they can be found. But if we're putting something into the OpenTelemetry org, we're saying, you know, the SIG maintainers are on the hook for keeping that stuff working well.
F
Oh, OpenTracing contrib suffered from the problems you would expect, like a tragedy of the commons: people donated packages, but it wasn't clear who was supposed to bottom-line keeping those things up to date with the latest API or the latest version of the underlying library.
G
I think having everything in a single repo helps because then, as you're updating things, you're obligated to make sure everything's refactored at the same time and all the tests are passing before it can actually be merged. So you're kind of forced into doing the right thing, instead of being able to forget about some of the other things that have been committed and shared.
G
I think at some point we talked about maybe taking some of the core pieces from the instrumentation project and moving them over to the API/SDK project, but I don't think that we have to finalize that now. Just know that that's a future discussion that we still want to keep as a possibility.
F
The only thing that's a little weird is in the artifacts. Oh yeah, I don't know; I guess I don't know Java well enough. At the end of the day, I expect most users are going to be interacting with the agent; it's the first thing that the end user actually interacts with, along with the packages that they grab. So maybe it looks slightly weird.
B
Right now, for example, I've been working with the JavaScript ones, and some of the plugins are in opentelemetry-js and some in the contrib repo, but when you pull them in through npm, everything's in the @opentelemetry namespace; there's no separation showing where it came from. Yeah.
F
Certainly post-1.0, or as the API settles, it's really great to have everything in one place, because there's this "you break it, you bought it" attitude. If you want to make an adjustment to how OpenTelemetry works in that language, it would be nice in the future to say: you can do that, but you're also on the hook for updating the plugins. It's not "we slammed some stuff into core that changed how it works, but someone else can go deal with making the public plugins utilize it."
D
Okay, so do we want to punt on this decision for now? I guess my preference would be to keep "instrumentation" somehow in the name, because I feel like that is the most descriptive of what this SIG and repo are about.
A
All right, I have one question regarding that. Before we start introducing those manual instrumentations, should we do something with what Tyler has started, all those typed spans? It seems it would be very beneficial, before we start putting a lot of effort into the manual and automatic instrumentations, to have those semantic conventions and, well, the common stuff somehow extracted.
D
Yeah, so Tyler had kind of left that off as: okay, the proof of concept is done, and we were just waiting for somebody to pick up the first instrumentation, test it out, and really see how it works. And the AWS instrumentation actually is different: it doesn't end up using the decorator and the typed spans, because it's injecting an interceptor.
A
So I spent today trying to understand how those servlet instrumentations work, and I feel the same way. A lot of common logic is hidden somewhere among those classes, and it's hard to understand whether those different implementations are a little bit different because they're meant to be that way, or whether they were just born at different times, so they're a little bit different but really should be the same.
F
I would go a step further. I'm very pro-sugar; I'm excited for us to get the API battened down, but I do think it's a low-level API, and I'm sort of turning my attention now towards usability. It's clear that you can put a lot of sugar on top of the current API: convenience functions (things like these typed spans are a great example), annotations, all that other stuff making it very easy to grab the active span.
E
Yeah, this is mostly just a question about whether I should add this as a feature-request issue, or whether this is something that's already been thought about in plans. For example, if a vendor wants to do some additional instrumentation or add some agents, without having to fork the repo to make that happen, is there a way that we can facilitate that?
B
So you want to use this agent's bytecode manipulation; you want to use this agent and the OpenTelemetry APIs, but you just want to add a literally new instrumentation package. To take an example: if we don't already instrument Elasticsearch, I want to add Elasticsearch instrumentation by just adding a jar somewhere.
B
And Trask has asked before for some of these instrumentations to be packaged separately anyway. Again, I like the idea of it being all-in-one by default, but having the option of, you know, semi-supported or totally vendor-specific things happening with an additional jar. Yeah, I think we can come up with something for that.
E
Before, if you wanted this kind of thing, you had to write your own tracer. Having the ability for vendors not to have to fork and do their own distro, and instead have something that bundles things up and puts it together, seems way easier for most vendors to work with than having to keep forked repos in sync and all that kind of stuff. So yeah.
G
You know, you bundle it in with your single Java agent, like we're doing right now; or we could maybe do something like what we're doing with exporters, where the Java agent can use the service loader to find exporters on the user's application path. Is that correct? Is that the way it currently works?
G
So we might be able to do something similar. What I'm trying to get at is: maybe we can do something similar to what we're doing with the exporters, and do the same with instrumentation. But in order to do that, we would need to publish the core classes, like the instrumentation interface and stuff like that, so that custom instrumentation can be written and loaded separately.
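The service-loader approach being proposed relies on the standard JDK `ServiceLoader` mechanism; a minimal sketch, where the interface name `InstrumentationModule` is an assumption for illustration, not the published OpenTelemetry interface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Hypothetical published interface: a vendor jar on the class path
// would declare an implementation in
// META-INF/services/InstrumentationModule, and the agent would find
// it without anyone forking the repo.
interface InstrumentationModule {
    String instrumentedLibrary();
}

class InstrumentationRegistry {
    // Collect every module the JVM can discover via ServiceLoader.
    // With no provider jars present, this list is simply empty.
    static List<String> discover() {
        List<String> names = new ArrayList<>();
        for (InstrumentationModule module : ServiceLoader.load(InstrumentationModule.class)) {
            names.add(module.instrumentedLibrary());
        }
        return names;
    }
}
```

This mirrors how the exporter lookup described earlier works: the agent only needs the interface published; concrete instrumentations arrive as extra jars.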
A
So I would very much like to start some discussion about this; at least, I want to stop tolerating the current situation. I want to boil the swamp. The current situation is bad: it takes a lot of time, and it is flaky. We have to do something, and the recent changes to stability that Trask has done, if I'm not mistaken, have made the tests even longer. So ideally we will have our pie and eat it too: fast and stable tests.
B
You know, we talked before about the idea of buying larger CircleCI instances and getting a larger budget there, which, again, multiple vendors might even contribute to in some way. How much of the time problem could we just throw money at, as opposed to throwing developer hours at it? Is that clear?
B
We don't really throw a whole lot of money at it, but we also have those instances; I was just throwing it out there. I think there are two separate issues here. One is speed, which I think can be attacked in a fairly mechanical way: how long does it take, let's break down the time, let's optimize some things. As for stability, I've made a few attempts: oh, I see something flaking out...
B
...I hit the re-run button, then try to dive into it: okay, where did it go wrong? Okay, let me try to fix that. But does CircleCI publish any data for us? Can we actually see that kind of data, where we can focus in on "X percent of the failed runs were because of this timeout or this whatever", and treat those as a punch list: let's get the top ones, let's always focus on the top?
G
They used to. I think the UI has been switching around a little bit; I was looking for it the other day and couldn't find it, so maybe I just wasn't looking in the right place. But in my experience, on our side of things, the most unreliable ones tend to be, you know, Elasticsearch, and port collisions.
G
The port collisions are generally a result of: we get a random port, then we release the port and try to bind to it again. Linux seems to be very eager at reusing released ports, reissuing them very quickly, and so you have this race condition where, between releasing the random port and rebinding to it, somebody else has already taken it.
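One common fix for this release-and-rebind race is to never release the port at all: bind once to port 0 and hand the still-open socket to the server under test. A minimal sketch (the `HeldPort` wrapper is illustrative, not from the OpenTelemetry codebase):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

// Bind an ephemeral port and keep it bound. While the socket stays
// open, the kernel cannot hand the same port to another process, so
// the "release, then rebind" race window never exists.
class HeldPort implements AutoCloseable {
    private final ServerSocket socket;

    private HeldPort(ServerSocket socket) {
        this.socket = socket;
    }

    // Port 0 asks the OS for any free port; we learn which one we got
    // from the socket itself, without ever closing it.
    static HeldPort reserve() {
        try {
            return new HeldPort(new ServerSocket(0));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    int port() {
        return socket.getLocalPort();
    }

    @Override
    public void close() {
        try {
            socket.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Whether this fits depends on whether the server under test can accept an already-bound socket; if it insists on binding its own port, a retry loop (below) is the fallback.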
G
The other classification that I've seen a lot is Elasticsearch. Embedded Elasticsearch is kind of tricky, so maybe if we could switch to a containerized instance of Elasticsearch and not have to use the embedded version, that'd make it more reliable. I'm not sure; I haven't looked into that at all. There are other problems with reliability, but those seem to be the biggest ones that I've run into.
A
I have a gut feeling that at least some of the failures are due to various timeouts, which leads exactly to performance: the overhead of all those instrumentations, and how many resources we consume during our tests. And this failure that you just opened is interesting, because, well, it's like: do we have some...
D
I guess this would be an ordering issue, but I don't know. I mean, I've dealt with sporadic stuff before, and the only way that I've been successful is just going through and tackling them one by one, understanding them, fixing the reason why they're sporadic. If it's a port thing, you know, just do a retry.
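The "just do a retry" suggestion can be sketched as a small helper for known-sporadic, idempotent setup steps (like binding a port); the class name and shape are illustrative:

```java
import java.util.concurrent.Callable;

// Retry a flaky, idempotent action a few times before letting the
// test fail for real; rethrows the last failure if every attempt fails.
class Retry {
    static <T> T withRetries(int maxAttempts, Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }
}
```

This only masks failures that are genuinely environmental (port races, slow container startup); using it on an ordering bug like the one above would just hide the bug.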
A
Maybe the easy place to start is solving the other part of the problem: slowness. Maybe if we start making tests faster and leaner, taking fewer resources, that may help with flakiness as well. But at least it will speed things up, so it will be less irritating to wait ten or fifteen minutes only to discover that they failed.
G
...the way that we expect it to, and that's completely separate from running in a real environment. Then I think that our testing of running under automatic instrumentation can be much lighter: we can do things like validate that it just calls that, instead of verifying what the order of the generated spans is. Does that make sense? Yeah.
D
Anyway, I'm a little concerned about, you know, reducing our test coverage of auto instrumentation. I like that it does a full test of the entirety of the spans that are expected out of it, and that it tests all these different use cases with the auto instrumentation. It gives me a lot more confidence in it.
D
I can totally do that plan. How about I create a separate issue for each one, and then, when I see duplicates of the same issue, I can just tag that onto the existing issue, so we can kind of prioritize based on how many times we're seeing particular failures.