From YouTube: 2021-01-15 meeting
C
I'm guessing that's enough. Where in the world is... yes, where in the world is Nikita?
C
We started talking about trying to get to Arizona for a month during January every year, or something.
B
Would love it... love it if it weren't like the trip to Australia. Yeah, but how long is it from Japan? Even then it's a long trip, right? Yeah.
B
It's more mountain this year, right? Yeah. One year we did go to Costa Rica; that was too close to the equator. It was just ridiculously hot, and then we were at a place with a black sand beach, and the temperature was like 105 Fahrenheit every day. Like, we literally could only go to the beach at dawn, because it got so ridiculously hot. We couldn't do it.
B
In the water, and it's awesome, right? Even at 100. If you have to get out of the water, though, you have to walk on the beach. Well, that whole vacation we basically sat by the pool, jumped in the pool when we got too hot, and then got out of the pool, and then repeated that over and over again. So the place we were staying was a little eco hotel that was run by a Swiss brother and sister, and he was a chef, so the food was just incredible, like amazing gourmet food.
C
So this morning we had a short meeting. We ended, like... well, like 15 minutes, tops.
C
So
we
didn't
have
too
much
to
chat
about
a
couple
things
that
came
up.
Oh
yeah,
jason
plum
was
asking
about
doc
the
light
step,
daca
updating
the
light
step.
Dock
docks
were
wrong,
and
so
I
was
asking
why
why?
Why
is
he
want
to
update
the
light
step
docs
and
apparently
they
have
the
best
seo
for
open
telemetry.
So
people
end
up
there.
E
And I just thought it would give the wrong impression, because, I mean, it says very plainly that OpenTelemetry has no concept of baggage, or something, and I'm like, oh, that's not right. So...
C
Who else? Oh yeah, Jason Plum again asked about the AutoService update? Well, so we had to have somebody give us these topics. Thank you, Jason.
E
Goodness. So we have an issue with our agent where we're using an incompatible... like, we've shaded in (we've since fixed it) an incompatible version of Guava, and so it breaks the OTLP exporter in one of the versions. And so, going through this investigation, I found that AutoService was at rc3, which is three years out of date, and I submitted rc7.
C
That I know well.
D
Yeah yeah, that's good. And so, yeah, that also unblocks the other thing, I guess, right? This Guava update.
C
Cool, yeah. And I mentioned how the new testing via the Java agent should unblock us to be able to update Guava; then it won't give us that much pain. Although Mateo actually had just been looking at it that day.
C
We could, but then, when we test Cassandra 3.0, right, we need an old version of Guava, or Cassandra doesn't work.
C
So yeah, Mateo said he was looking at that already, and there wasn't that much, so hopefully we'll get rid of that. And yeah, that'll be nice, because that keeps popping up from time to time.
C
So yeah, that was it.
E
Yeah, I mean, I could talk about some of the performance stuff I was looking at. Yeah, so we have some early-adopting customers that are concerned with some of the overhead, and they did some initial testing using their internal checker system, and I decided to try and reproduce it and to quantify it. And I think we've talked... I think the four of us have probably talked about overhead testing, and yeah.
E
How much value is there in that? There are so many dependencies, and it's kind of like microbenchmarking, but maybe even less consistent, and there's just a lot of variance. But I wanted to see if at least I could do some sort of order-of-magnitude checking, and so I have a little repo that I spun up, which is... I mean, it's just under my GitHub account; I think you can see it. But if you want to share it, I'll just paste a link to it.
E
Yeah, so it's pretty low-fi, but the idea here is that you would run a test that fires up the Spring PetClinic REST instance. I don't know if you know it; it's just like Spring PetClinic, but there were some people that tried to separate out the UI stuff from the REST endpoints and make it more like a real app.
E
Instead of the weird MVC thing that the classic Spring PetClinic is. And it'd be cool to have pluggable apps for this too, but right now it spins that up; you can run it with or without the agent, and then it will run a test harness through this tool, k6. I don't know if you all have used k6 before; it's new to me this week. If you look in the k6 subdirectory, it'll give you an example of what's going on. This is a new tool for me.
E
So that's what the test does: it throws traffic at the PetClinic running locally, and then, yeah, we get results at the end. It gives you a little summary, and it'd be cool to have screenshots of this, but I don't have that yet. Yeah, if you look at run... yeah, that's fine. So, very simple: it just runs the PetClinic with no agent, and then...
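The harness steps being described can be sketched as shell commands; the jar names, script path, and the per-instrumentation disable flag here are assumptions for illustration, not taken from the actual repo:

```shell
# Baseline run: PetClinic REST with no agent (jar path is assumed).
java -jar spring-petclinic-rest.jar &

# Instrumented run: the same app with the OpenTelemetry Java agent attached.
# The -Dotel.instrumentation.jdbc.enabled=false flag (assumed name) is the
# kind of switch used later to cut out JDBC instrumentation for comparison.
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.instrumentation.jdbc.enabled=false \
     -jar spring-petclinic-rest.jar &

# Drive the same k6 load at each variant and compare the end-of-test summaries.
k6 run k6/script.js
```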
E
So I'm just scratching the surface here, but the thing I did uncover pretty quickly is that, wow (and this is probably no surprise, so I'm saying it kind of sarcastically), the JDBC instrumentation sure has a lot of overhead, right? That probably doesn't come as a surprise, because I know we have the JDBC DataSource instrumentation disabled by default, but JDBC itself is pretty heavy too. So, in my very, very rough findings today, it looks like...
E
Going
from
no
agent
in
this
in
this
application,
going
from
no
agent
to
agent
more
than
doubles
the
time
on
average,
like
the
average
time
doubles.
If
I,
if
I
cut
out
jdbc
normalization,
sorry
sql
normalization,
then
it
goes
down
to
about
50
overhead
and
then,
if
I
cut
out
all
of
jdbc,
it's
basically,
you
know
comparable
in
terms
of
iteration
duration,.
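Those percentages can be sanity-checked with a tiny helper; `overhead_pct` is a hypothetical function, and the sample latencies below are illustrative rather than the actual benchmark numbers:

```shell
# Percent overhead between two mean iteration times (same units).
# overhead_pct is a hypothetical helper, not part of the benchmark repo.
overhead_pct() {
  awk -v base="$1" -v agent="$2" \
    'BEGIN { printf "%.0f\n", (agent - base) / base * 100 }'
}

overhead_pct 10 21   # "more than doubles" means overhead above 100
overhead_pct 10 15   # roughly the "about 50% overhead" case
```

Run-to-run variance dwarfs small differences in a harness like this, so only order-of-magnitude comparisons are meaningful.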
D
Yeah, we definitely had ideas of also starting to play with that. So this starts up the Collector to get the CPU and memory metrics of the process, and we had an idea of also starting up Prometheus, just so that we can scrape it and see a graph during the benchmark, and that might be fun. We haven't gotten that far, but yeah, cool. And just in case: this is based on something I wrote a long time ago, which is more complete. So, just for reference.
C
Nice
from
my
experience
with
trying
to
benchmark
these
overheads,
I've
found
it
easier
to
just
hit
one
like
on
one
end
point
to
focus
a
test
on
just
one.
Instead
of
trying
to
have
too
much
of
a
user
flow,
even
though
it
makes
it
less,
you
know
real
world.
C
It helps a little bit. It helps with the run-to-run variance, and then it also helps when you actually start to get down to tuning it, because there are just fewer code paths that you're looking at in a profiler. The profile is less wide than with all these different code paths, so it's a little bit clearer where the bottlenecks are.
E
Yep, no, that makes a lot of sense. It just didn't seem like... because PetClinic is so simple, and it's all REST, the very simplest operation would just be a GET or a PUT, and it kind of seems silly, because basically all of your overhead is just in the database, right? So I wanted to at least try to make it a little more...
C
Yeah, no, no... it's surprising to me. That doesn't sound... doesn't sound good at all.
C
The p95, I mean the p99 especially, can be more, because any kind of perturbation we have, like using up a little bit more memory, can affect GC, which can affect those p99s a lot more. But man, if the average is...
C
...going up, you know, more than... if the average is over 10%, that's pretty bad.
E
Yeah, like I said, I will try and get a more visible write-up. Yeah.
C
Yeah, I mean, I spent a good amount of time tuning the Glowroot JDBC instrumentation, just since that is the most common thing, but it didn't start out anywhere bad and then get, you know, significantly better. The one place where I remember seeing it, actually, was comparing to New Relic: New Relic was capturing each next()...
C
You
do
was
capturing
timing
information,
and
so
we
had
this
horrible
enterprise
app
that
when
you
log
in
the
first
time,
we
would
read
like
two
million
records
from
the
database
doing
iteration
over
all
those,
and
so
actually
the
new
relic
agent
would
capture.
You
know
the
timing
for
each,
which
makes
sense
right.
You
want
to
know
that
timing,
but
times
two
million
ended
up
just
being
a
lot
so
in
glory
to
have
a
switch
to
turn
that
off.
B
Yeah, I think we ended up disabling that by default in the New Relic agent at some point. Yeah.
C
Well, ideally, what you would want to do there is instrument specific drivers, like the MySQL driver, with the code that actually goes and gets the next batch, because that's what's interesting. That wouldn't be a lot of work.
E
Yeah, now that you mention it, I think that Spring REST thing... I'm certain it uses Hibernate, but I think when I disabled JDBC, I don't think I even saw Hibernate stuff in there. I should double-check that; I'll write that down.
C
Hey Jason, I saw you say you had JFR hooked into this. Have you run it with the profile... the profiling profile? I...
C
Okay,
yeah,
because
I
mean
if
it's
really
that
much
overhead
the
the
jfr
profile,
should
I
mean
just
the
code.
You
know
your
typical
profiling
code.
Profiling
would
probably
point
to
where,
where
where
the
heck
is
it
spending
all
that
time.
C
Because we can access some package-level things inside of the code. I forget; I feel like it came up, and maybe we even documented it.
D
Of course we do get people asking about it... and then sometimes you guys suggest to me I should just say "use the agent" or something. But another big use case for libraries is Lambda. I don't know if they'll ever be able to get the agent to run well on Lambda, and we do still want people to be able to use OpenTelemetry Java there, so that's another reason. Internally, also, we want more library instrumentation, and so I'm hoping that, at least for new instrumentation...
C
You know, and it...
D
I mean, these aren't users, they're vendors, right? Most of our contributors. So vendors probably still don't realize it as much as maybe other users would. But again, if it's really annoying to the library instrumentation, I would definitely not push for it. But if it's just a matter of splitting out a class and then having an advice that accesses it, it's not a huge overhead, so that could still...
D
And I don't know how strict it is. Like, I wouldn't necessarily block the PR; maybe we'd just ask, "Can you extract this?" And if they're like, "that's too much work," then okay, I would still be okay with that. I think, like last time... I'm trying to remember what I might have... like Cassandra, or something like that.
C
Yeah, because, you know, it has a clean API for a listener. Yeah.
C
I think that's a Java agent instrumentation.
C
So is this just applying our trace... our listener?
C
Actually, all right, I'll try to look at that PR later today also.
B
A totally non-urgent thing: do we want to talk about the logging exporter? My PR basically makes it identical to the one in the SDK. Yeah.
C
...class loader, so it only needs the shading of the API that we put in the bootstrap class loader, and that shading already happens.
B
And then use it, yeah, because with this tweak it basically becomes identical to what's...
C
So what did you all think of the OTLP JSON logging? Because I thought, from the agent perspective, I was in favor of changing our logging exporter to be the OTLP JSON one.
B
Cool. Speaking of which, I've verified that Plum's... well, I mean, we worked together on it, but Plum's and my fix for custom propagators in an agent distro: that fix does work, so that's cool. It was good to see, or something.
B
So I did call out in the back channel that if you want to parse the OTel resource attributes environment variable, you have to pull in the autoconfigure module. Now it doesn't work otherwise: all that code is now only in autoconfigure; it's not in the SDK.
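For reference, the knob under discussion is the resource-attributes environment variable read by the autoconfigure layer; a config sketch (the attribute values here are made up):

```shell
# Parsed by the SDK autoconfigure module (not the bare SDK), per the discussion:
export OTEL_RESOURCE_ATTRIBUTES="service.name=petclinic,deployment.environment=dev"
```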
B
It's the latter. I'm not concerned long-term. I mean, long-term it's a little bit... it's definitely surprising.
C
Yeah, I think just in the docs, you know... most people will probably want that. Do you think most people will want the autoconfigure?
C
Do you think, if the extension... like, say the autoconfigure extension proves out and it's all good, would you ever promote that out of an extension? Or is "extension" sort of the right term for anything?
C
Yes, it's there right now; just that one feels a little bit more... oh, am I in the wrong...?
B
Oh, if you... right, so if you depend on autoconfigure, you get them, right? But then does the agent want to have them? Does it pull those in? Does it get the autoconfigure stuff, so it'll get host and process?
C
So your delineation for the autoconfigure is properties: things that are like what you just mentioned?
C
I like just the mental model, though, of what the autoconfigure does and doesn't do.
C
...helps for the autoconfigure. What do you think about having that property source in the autoconfigure that we can hook into? Because, I mean, it seemed reasonable to me; if it's for the purpose of, you know... everything about autoconfigure is kind of about properties and being able to bridge those in some different way.
C
You know, like: otel.propagators... it looks at and instantiates the right list of propagators. We want it to get that list from the otel.propagators entry in that properties file.
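A minimal sketch of that selection, assuming the standard autoconfigure property name for propagators:

```shell
# The autoconfigure layer reads otel.propagators and instantiates
# the matching propagators by name:
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.propagators=tracecontext,baggage \
     -jar app.jar
```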
C
Oh, I see, yeah. So it got added as a custom agent point; you're right, not for that. That was already in there.
B
It just feels like a super kludgy way to do that. I wish there were a better way, because, for me at least, when I was working with the example distro, it was super surprising that you would basically put command-line options into this file, in this weird way, as a way to specify that.
B
I don't actually know what I was expecting; I was just very surprised to see that. So I haven't thought through what a better solution to that would be. I hadn't thought about the default perspective.
C
Can you look at the... I'm curious what the... I guess I can look at it on the Splunk signal...
C
We're just waiting for Nikita to weigh in on the Thrift thing...
C
Max, I know, for example, I haven't migrated our stuff over to use this, but I am kind of doing some things where, like, we have our own config file, and so I'm mapping our config into OpenTelemetry config. So it is sort of our...
B
Yeah, I guess the fact that it's a property source, I think, is the thing that seems weird to me: that I have to know all these magic strings to put in here, rather than just being able to do something programmatic. If I got a config object of some sort that had programmatic calls on it... that, to me, would be like a default config object with, you know, all the different options there, rather than having to know all the magic strings.
E
Anything you want in here, right? Yeah, I mean, especially because there are extensions, and who knows what vendors might be adding. Like, it seems... like, I don't know, I see pros and cons. Yeah.
E
Tell that to the maintainer, I mean.
B
I think the more interesting thing is the ability, I think you've added in autoconfiguration, to be able to specify named propagators. Is that right? The propagator SPI now supports having a name for the propagator, is that right? It feels like we should do that for exporters and these other things as well.
C
A good long weekend for those of us who get Monday off.