From YouTube: 2022-03-01 meeting
A: John or somebody, yeah. Fine.
A: I wanted to ask your thoughts on the batch span processor worker thread. It's synchronous, right: it waits for the exporter to complete, or to time out after some configurable interval, 30 seconds by default.
A: I was wondering: what I've seen, especially with a remote ingestion service with some delay there, is that it's easy to overflow the queue. Or not easy, but with pretty high throughput you can overflow the queue. So I was looking at what it would take to multi-thread it.
A: There's some nice simplification in it being a single thread, but it looked like it might be possible to at least not have the thread wait on the downstream before it picks up the next batch and starts processing it. I mean, there's some trickery there.
A: I haven't actually tried it yet, but we get back the CompletableFuture; you could stash it. You could hold on to a maximum of, say, five CompletableFutures.
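The idea of stashing a bounded number of CompletableFutures can be sketched with plain java.util.concurrent. This is an illustration of the pattern only, not the actual BatchSpanProcessor code; the class and method names here are made up:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Sketch: instead of blocking until each export completes, the worker only
// blocks when maxInFlight exports are already pending, so it can keep
// pulling batches off the queue while earlier exports are still running.
class BoundedInFlightExporter {
    private final Semaphore inFlight;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    BoundedInFlightExporter(int maxInFlight) {
        this.inFlight = new Semaphore(maxInFlight);
    }

    // Returns immediately unless the in-flight window (e.g. 5) is full.
    CompletableFuture<Void> export(Runnable exportBatch) {
        inFlight.acquireUninterruptibly(); // back-pressure point
        return CompletableFuture.runAsync(exportBatch, pool)
                .whenComplete((ok, err) -> inFlight.release()); // free a slot
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

The worker stays a single thread; only the wait moves from per-batch to when-the-window-is-full.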
A: Yeah, I mean, I'll dig into it more, I think, if we wanted to.
A: Okay: still have the single worker thread, but just, yeah, process as many of those batches as you can in that...
A: Yeah, I mean, as long as... if you still have that blocking, then as long as you're blocking, all that data is queuing up, and...
A: ...we have an actual customer, so let me try something and see if that even gets us to where we need to be. Or if, you know, like you said, it's just incompatible: the backend is too slow, or a single thread is not enough to even serialize everything. There are other options, but...
A: I can see it's dropping; we're dropping about 50 percent with them. So I just need to double the throughput somehow, not tenfold, so...
A: Right, so actually we had an interesting... it was actually several customers, but... so it's not spans. I actually have a copy of the batch span processor in our agent, down lower, because we still have like a wedge to metrics, so we had metrics flowing through there. Well, we have one pipeline that goes out, and it sends metrics, logs, and traces and spans, and on that telemetry pipeline I threw a batch span processor, basically, in there, and that was getting overflowed.
A: Quite often, I noticed, because the metrics were getting reported once a minute, and when all those metrics would come in, that would count towards the... I still just had the... I think I had to bump the default a little bit, to like 4,000 records. But then all those metric records, when people had like Spring Boot Actuator... I had some people reporting 100,000 metric data points each minute, and so that...
B: Yeah, yeah. Are you not aggregating? You just send in the points directly?
A: Oh, even with it, there are just so many metrics in there that, like...
A: Which was why I was really interested in the metric views, and how we'll be able to control, you know, dimensions there that customers may think they want, but then end up not... because these were actual dimensions that they had configured, or that...
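As a toy illustration of what such views do (this is not the real OpenTelemetry View API; the class here is invented): re-keying points onto an allow-listed subset of attribute keys collapses unwanted dimensions before export.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Toy sketch of the effect of a metric view (not the real OpenTelemetry
// View API): keep only an allow-listed set of attribute keys, so points
// that differ only in dropped dimensions merge into one series.
class AttributeView {
    private final Set<String> keepKeys;
    private final Map<Map<String, String>, Long> aggregated = new HashMap<>();

    AttributeView(Set<String> keepKeys) {
        this.keepKeys = keepKeys;
    }

    void record(Map<String, String> attributes, long value) {
        Map<String, String> reduced = new TreeMap<>();
        attributes.forEach((k, v) -> {
            if (keepKeys.contains(k)) reduced.put(k, v);
        });
        aggregated.merge(reduced, value, Long::sum); // collapsed series sum up
    }

    Map<Map<String, String>, Long> snapshot() {
        return aggregated;
    }
}
```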
A: I haven't even looked at what the attribute... like, can we have spaces in an attribute?
A: ...any problems? I just thought it looked odd, but I guess it's probably odd to put a space in the MDC.
A: Anyway, it was just a dummy test. I'm sure I put like "mdc key" or something like that in the test.
A: I wanted to merge this. I don't know if... I think I was just hoping maybe Matthias was here. I don't think so.
A: We still... yeah. It was just one of those examples where we had duplicate HTTP client spans, because the JAX-RS clients had used like HttpURLConnection or Google...
B: Removing seems fine. I'm sort of familiar with the annotations; I don't even know what the HTTP client looks like, but... oh, yeah.
A: Yeah, it was basically... kind of this, well.
B: Right, when you think of it: something like the AWS SDK. We instrument the SDK, not just relying on HTTP instrumentation, because it's an RPC SDK, and so we need to make sure... like, it's actually important that the client spans, our RPC spans, inject. I have a feeling that's not how it's used, but that's just one thing that I could imagine being a...
A: Yeah, it's a little bit, because it's like you pass in your object and it'll, you know, it'll turn it into JSON. So I did think...
A: Yeah, yeah. Delete. Cool. I will... I got Lori to chime in, so, yeah.
B: Of course. So, if Maven Central downloads being flaky really hurt us too much in the long term, then mirroring... if it's only for CI, then having it access GitHub Packages would be fine. We don't want our normal build to do that, because that's not anonymous, but we could mirror our dependencies into GitHub. If that helps with the flakiness, that's one idea: a possible thing to try if we're not able to find another solution.
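A CI-only mirror might be wired up roughly like this in a Gradle Kotlin build script; the organization name, repository URL, and environment variables are placeholders, not an actual setup:

```kotlin
// build.gradle.kts -- sketch only. When the CI env var is set (as on GitHub
// Actions), resolve from a GitHub Packages mirror first; everyone else keeps
// using Maven Central anonymously.
repositories {
    if (System.getenv("CI") != null) {
        maven {
            url = uri("https://maven.pkg.github.com/example-org/dependency-mirror")
            credentials {
                username = System.getenv("GITHUB_ACTOR")
                password = System.getenv("GITHUB_TOKEN")
            }
        }
    }
    mavenCentral()
}
```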
A: I wonder if this Central flakiness will prompt such a feature.
A: Yeah, I like that package-repo idea, because this is pretty hacky. We have to do this in our official internal build pipeline, because it runs offline for extra security, which is annoying. It makes things like Docker really annoying, but...
A: No, like... you have an initial phase where you're online, where you can do stuff beforehand, and then the build itself is supposed to not be online. But, I mean, you can kind of just do everything before, yeah. That's also...
A: Yeah, but I don't really understand that, and that seems kind of annoying to have to do in all of our build steps.
A: Do you care at all? I just tried it out because it was in the spec repo. It's kind of cool: it found a few, and it's low-noise, because it only looks for known misspellings from a whole dictionary.
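A check along those lines can be wired as a workflow; this is a sketch using codespell (a dictionary-of-known-misspellings checker), not a copy of the spec repo's actual setup, and the skip list and paths are illustrative:

```yaml
# .github/workflows/spellcheck.yml -- sketch. codespell only flags known
# misspellings from its dictionary, which keeps false positives low.
name: spellcheck
on: [pull_request]
jobs:
  codespell:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: pip install codespell
      - run: codespell --skip "*.png,*.jar" .
```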
A: Cool, yeah. I do want to, once I kind of understand... go through the learning of the GitHub Actions stuff, go into the core repo, and bring some of that stuff over there also. I know I owe that Error Prone internal-javadoc thing also, but I'll try to sync.
B: I was looking into how to publish plugins from the spec repo, like... I want to find a way that's not like our instrumentation, before that Gradle plugins project becomes a real project. And I found a way.
A: Oh right, I see what you're saying, yeah. If we move it over to the core... move it to core, publish it, and want to share it... yeah, reuse it, yeah, yeah. I think, you know, with all the... as long as, I think, we had discussed before about build or internal...
A: And they asked for some... what Yuri asked for was some change to some text that I had actually made up, that was in the auto stuff; I actually went and changed it.
A: Over here, this one actually has approvals, but it's still not merged.
A: ...about it, but at least now... my motivation for it was the log, the thread attributes.
A: Exactly. I don't... like, you know, people like their thread names on their logs, so...
A: [Name] mentioned that the HTTP route stuff that Mateusz did was working for him.
A: So yeah, I'm hoping... we're hoping that Emily joins again this week, and we'll chat.
A: Yeah, I think what they should really do is the bridge, basically. Like, so, what would a bridge look like for them? If they want OpenTelemetry metrics users to be able to use OpenTelemetry instrumentation that generates metrics, then they would need to write a metric exporter, or a metric reader, or a metric provider?
A: ...that would then funnel into... I mean, short of doing the full API-level bridge, which is an option, sort of Micrometer-style, they could probably just push those metrics, the post-aggregated metrics,...
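That "push the post-aggregated metrics" option might look something like this sketch; MetricPoint and the backend Consumer are hypothetical stand-ins, not OpenTelemetry SDK types (a real bridge would register a MetricReader/MetricExporter with the SDK and forward each collected point):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a one-way metrics bridge: once per collection interval, already-
// aggregated points are handed to the other metrics library's ingest hook.
class MetricsBridge {
    // Hypothetical stand-in for an aggregated data point.
    record MetricPoint(String name, Map<String, String> attributes, double value) {}

    private final Consumer<MetricPoint> backend; // the non-OTel library's ingest hook

    MetricsBridge(Consumer<MetricPoint> backend) {
        this.backend = backend;
    }

    // Called with post-aggregated points, e.g. once a minute.
    void onCollect(List<MetricPoint> points) {
        points.forEach(backend); // funnel each point into the other system
    }
}
```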
A: They didn't really answer that. There was the issue of, you know, the metrics SDK not being ready, but, I mean, that seems moot given the timeline, I think. Yeah, it's confusing, because actually the other reason Emily mentioned was that they wanted that abstraction layer, like, to protect themselves from somebody changing...
A: But then it doesn't make sense why they're not doing that for traces.
A: Yeah, hopefully we'll hear from Mateusz and Nikita.