From YouTube: 2021-01-29 meeting
B
Have you figured out how to get rid of that management bit in there?
B
I have had a solo type account in the past, yeah; it's probably around somewhere. I haven't looked in a really long time.
B
I still have my login. Google password management for the win; yeah, so I still have a login.
B
Do you know where the Jira is? It's, like, starting to tell you, yeah? I can find the issues.
C
I mean, in the...
A
But yeah, whatever. I mean, you know, I'm with John: I just wanted to state for the record that I thought the plural looked really weird, so that when somebody else comes back later, when Ted or somebody chimes in and pays attention to it and they're like, "Hey, this looks really weird, why did we do this?" Well.
A
Half this meeting is called "to do." Do you want...
B
Cool, yeah. There's not much to it, but I think it will be useful for debugging these particular issues. I know we don't know exactly what's going to happen with this whole span suppression thing, but in the meantime it's nice to be able to ask for debug logs and see exactly what's going on.
A
Cool, yeah, and it's not overly verbose. That's my only concern about debug logs: that they're not super spammy, because then we lose meaning. I think, Anuraag, you pushed some PR where you had them change it from debug to trace.
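The debug-versus-trace distinction they are discussing maps onto standard Java logging levels, where FINE roughly plays the role of "debug" and FINER/FINEST the role of "trace." A minimal sketch of why demoting a noisy message helps (class and method names here are illustrative, not from the agent):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelDemo {
    // Returns whether a message at messageLevel would be published by a
    // logger configured at loggerLevel. Moving a chatty message from
    // FINE ("debug") to FINER ("trace") keeps it out of debug output.
    public static boolean isLoggable(Level loggerLevel, Level messageLevel) {
        Logger logger = Logger.getAnonymousLogger();
        logger.setLevel(loggerLevel);
        return logger.isLoggable(messageLevel);
    }

    public static void main(String[] args) {
        // A trace-level (FINER) message is suppressed at debug (FINE)...
        System.out.println(isLoggable(Level.FINE, Level.FINER));   // false
        // ...but shows up once the logger itself is set to trace (FINER).
        System.out.println(isLoggable(Level.FINER, Level.FINER));  // true
    }
}
```

So demoting a message to trace keeps ordinary debug runs readable while still letting someone opt in to the full firehose.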
B
This to trace also; I don't care very much, yeah. I...
A
...bad idea at all; I think that would be incredibly useful. Yeah, just one flag and you can get all the information from your customer. Especially right now you can only have one exporter, right? So having kind of a workaround to get both their normal export going through the normal pipeline plus some... I think that sounds like a fantastic idea.
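The workaround being described, feeding spans both to the normal export pipeline and to a local debug sink even though only one exporter can be configured, boils down to a fan-out composite. A minimal sketch of the pattern using generic consumers (this is the general shape, not the actual OpenTelemetry exporter API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class FanOut {
    // Wrap several downstream consumers in one consumer, so a pipeline
    // that accepts exactly one "exporter" can still deliver each item to
    // the normal backend AND to a local debug log.
    public static <T> Consumer<T> composite(List<Consumer<T>> downstreams) {
        return item -> downstreams.forEach(d -> d.accept(item));
    }

    public static void main(String[] args) {
        List<String> backend = new ArrayList<>();
        List<String> debugLog = new ArrayList<>();
        Consumer<String> exporter = composite(List.of(backend::add, debugLog::add));
        exporter.accept("span-1");
        System.out.println(backend);  // both sinks received the span
        System.out.println(debugLog);
    }
}
```

The same shape works for span exporters: the composite satisfies the "single exporter" slot while each underlying sink sees every span.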
B
So we know precisely which instrumentation is generating that in the debug logs, rather than having to go to the back end to, you know, query it up and see what's causing it. I mean, that's in regular OpenTelemetry, so I could put up a PR for that tomorrow, just to put the instrumentation library info into the...
A
Yeah, no, we got some really nice consolidation of things into the SDK lately to share.
B
Remote? I know, I mean, I was just... I was running it locally, but then using the agent code base to do a remote debug on my local process. Right, it works great. You said to make sure you put suspend=y, yeah, and you do it in front of your agent initialization; then it works.
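For reference, the setup being described usually looks something like the command line below: the JDWP debug agent with suspend=y is placed before the -javaagent flag, so the JVM pauses and waits for the debugger to attach before the agent initializes (paths and the port are placeholders):

```shell
java \
  -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 \
  -javaagent:path/to/opentelemetry-javaagent.jar \
  -jar path/to/app.jar
```

With suspend=y the process does nothing until a remote debugger connects on port 5005, which is what makes it possible to step through agent startup itself.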
A
Yeah, we actually... yes, yes, we have, thanks to... I think I recall a discussion with Pavel and John.
A
By unit tests, you mean the ones where we are normal, yeah.
A
They do; so it's a full agent setup now. They do, yeah. There's a little bit of difference, right: we have a condition. Something really did global ignores.
B
The secret is to remember to remove it before you commit.
A
Yeah, I'll let Jason...
E
Nikita kind of drove this, but he and I were having a side conversation that was kind of fallout from the benchmarking stuff we talked about last time, maybe a week or a week and a half ago. Nikita was like, "Well, that's all cool, but don't you want to contribute that into the main repo?" And I was like, "But I hate monorepos." And he's like, "But you should still contribute it." And I'm like, "Yeah, okay, it's a pretty good idea." But other people were quick to call out that...
E
The thing that I've hacked up right now is a little bit Splunk-centric, so it needs a little attention there, but I think there's a way to contribute it meaningfully, and eventually to run these benchmarks from CI and have baselines that get published and tracked over time, maybe on the gh-pages branch or something. I don't know how other projects want to do that, or how we want to do it, but yeah.
E
Sorry, that's fine. I think that any results we publish about overhead (specifically, my benchmarks were aimed at looking at overhead) need to come with a giant caveat, so that people don't expect a specific number. Like: "All the benchmarks say it's three percent overhead; now I'm hitting four. What's wrong?" Well, of course. It needs to come with a big caveat stamp. Exactly: don't use...
A
Yes. From past discussions about publishing benchmarks, the consensus had been that publishing micro-benchmarks, like from the SDK project, was good, but on the agent side there was a lot of reluctance to publish benchmarks.
C
These are probably just the start-span commands; it's probably a two-nanosecond thing anyway. And so if we publish something that doesn't have variance, it's probably good. If it does have variance, we can still do it, but we don't have to publish it. I think that's what we talked about before.
B
If it does have variance, that's actually useful to publish, to show: hey, we changed nothing relevant between these four builds, and you can see what the normal variance is. This is just what our variance is on whatever hardware we're running on, or virtual hardware, versus somebody else's.
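Publishing the spread alongside the mean is plain sample statistics over repeated runs of an unchanged build. A small sketch (the run times are made-up numbers):

```java
import java.util.Arrays;

public class RunStats {
    public static double mean(double[] runs) {
        return Arrays.stream(runs).average().orElse(Double.NaN);
    }

    // Sample standard deviation: published next to the mean, it shows
    // readers what "normal" run-to-run jitter looks like when nothing
    // relevant changed between builds.
    public static double stdDev(double[] runs) {
        double m = mean(runs);
        double ss = Arrays.stream(runs).map(x -> (x - m) * (x - m)).sum();
        return Math.sqrt(ss / (runs.length - 1));
    }

    public static void main(String[] args) {
        double[] runs = {101.0, 99.0, 100.0}; // ms, same build, three runs
        System.out.println(mean(runs));   // 100.0
        System.out.println(stdDev(runs)); // 1.0
    }
}
```

A reader who later sees a new build move by less than that standard deviation knows the change is within normal noise for the benchmarking platform.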
B
I don't know; it seems like, with that data, at least at that point we can say we don't have a good benchmarking platform, right? Like: hey, you can see what our variance is; nothing changed between these three runs, and this is the variance we see. We need a better benchmarking platform. That's the proof.
E
I don't know. I mean, that's addressing a thing that Trask brought up earlier. At least anecdotally, from my runs, there is variance, but it's not wild, and I kind of attributed that to the fact that we were doing a lot of this; the workflow, I think, might cause some of these differences to sort of wash out. But, I mean, locally...
E
Yeah, I mean, I'd like to believe that, and I think you're right, but there are also a lot of different things that go on on my machine. I don't know.
A
And we could be a little careful by publishing the response times with the agent and without the agent separately, to track those over time, as opposed to having a single graph that says: here's the percentage of overhead. Oh...
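The arithmetic behind that concern: a single overhead percentage collapses the two series into one derived number, hiding which side actually moved. A tiny sketch with hypothetical numbers:

```java
public class Overhead {
    // Relative overhead of running with the agent. Publishing
    // withAgentMs and withoutAgentMs as separate series lets readers
    // recompute this and see which series actually drifted over time.
    public static double overheadPercent(double withAgentMs, double withoutAgentMs) {
        return (withAgentMs - withoutAgentMs) / withoutAgentMs * 100.0;
    }

    public static void main(String[] args) {
        // e.g. 103 ms with the agent vs 100 ms without: about 3% overhead
        System.out.println(overheadPercent(103.0, 100.0));
    }
}
```

If the baseline machine slows down, the percentage can stay flat while both raw times degrade, which is exactly what two separate published series would reveal.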
A
Yeah, that's a common problem. I remember publishing to Maven Central with Glowroot: whenever I would publish a staging repo, I had a set app that I would then target at that staging repo, and make sure that, you know, everything downloaded and everything ran, yeah. Some automation around that would be great.
A
Oh yeah, the question was about... oh yeah, Mateusz, who had been working on testing commons and was asking about stability for these different things. I just mentioned, as far as our priorities probably go, if we needed to rank them: library instrumentation and then the instrumentation API, primarily because Fleet and other people, but Fleet primarily.
A
Oh yeah, yeah; so, no, all of these will be alpha with 1.0.
A
No telemetry, yeah.
A
Yeah, I mean, most of it is Netty. Most of it comes from Netty.
B
And it's actually going to end up blocking the SDK as well, because they're not going to declare the spec at 1.0 until that is settled. So, okay: even though we're still declaring our config alpha, they're not; we are blocked on the spec being 1.0, which...
A
But I think, as long as the spec is 1.0, it seems not a big deal.
A
And yeah, I'm glad that everybody's blocked on the spec being 1.0, because then I think this will drive, I think there'll be, a flurry of activity. Well, I know that .NET still thinks they're going to release 1.0 tomorrow, but that's why they were pushing on the spec to be 1.0 by tomorrow.
A
...happen. Cool; was there anything that you wanted to chat about, Anuraag?
A
And so you've got the collector contrib, and then the Java SDK, the spec, and then java-instrumentation, and then the collector. So...
A
I was thinking, you know, you mentioned how the collector handles reviews, like they have that rotating-reviewer thing. I don't really... I mean, I don't think we need to; I think it's still fine. But from a volume perspective, at least, this explains why we're all feeling bogged down in PR reviews.
A
And yeah, so at some point we'll want to think about that, like a split, and...
C
Speaking of monorepos, not really Java, but .NET is having this conversation about what to do with their contrib repo, and apparently their maintainers are pushing for, what do you say, each artifact being maintained individually, so that it's not all on the maintainers, or something like that. It would be the only language, I think, that followed that model.
C
Either you have a monorepo where the maintainers do apply dependency updates to all the artifacts proactively, or you split up all the repos. I don't care about .NET, so I'm personally fine with either of them, but it just sort of reminded me: we have our hundred-instrumentation project, and we do deal with this, and I'm personally fine with it. I'm just curious if people have thoughts on that. Those are, like...
E
...pretty big gains to be had by splitting instrumentation out from the main agent body. I lobbied for that in the New Relic code base as well, because a lot of that instrumentation just doesn't change, and the guts of the agent do.
E
That comes along with its own different set of challenges, yeah. But maybe there's some logical grouping that can be carved out or drawn; you can draw a circle, maybe, around a certain group of them, and there's stuff that will be core and kind of never go away and applies in 85-plus percent of deploys. Like, I don't know, I'm just making stuff up, but servlet, JDBC, you know, HTTP clients: all that stuff is pretty core.
B
It could also make it easier for, like, these people in China or wherever: if they can come in and just contribute some instrumentation against our APIs, and not have to worry about building the entire thing and the full regression suite, there might be a win there in getting more people to contribute. Could be.
C
Yeah, let's first think of groups. In this document's example, this concrete repo has some exporters and some instrumentation. So the grouping that did come to mind was exporters and instrumentation, but then, for example, Sergey, or, I guess, the .NET maintainers: they feel as if they're not familiar enough with the libraries to maintain the instrumentation for them, and so they really just don't even want it on their plate.
A
I know in the .NET world, Sergey in particular is a big proponent of trying to get all the libraries to instrument themselves.
B
Yeah, I seem to come in pretty solidly number three in pretty much every time frame, last decade. That's a tough one.
B
Yeah, it's seven o'clock here, so yeah, but I think we're good for, oh, 15.