From YouTube: 2022-12-20 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
Hello, everybody, good morning or good afternoon. Let's start in one minute; let's wait for more people to join in the meantime. Please add yourself to the agenda, and check whether there's something you need to discuss; in that case, add it to the agenda as well.
A
Okay, let's start. I don't see Tigran here, but he added one item at the beginning about reviewing support for maps and heterogeneous arrays as attribute values. This is a kind of old PR about, basically, supporting maps and heterogeneous arrays. Any comments on that front?
A
And for the record, logs already support these. The idea is that we could support this in other signals as well, at least in the data model, for a start. Okay, if there are no comments on that one, please review it; we need to decide whether yes or no, basically.
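As background on what that support would mean on the wire: OTLP's AnyValue can already carry nested maps (kvlist) and arrays whose elements mix types, which is how the log data model represents them today. A minimal OTLP/JSON sketch of one such attribute; the key names here are made up for illustration:

```json
{
  "key": "app.payload",
  "value": {
    "kvlistValue": {
      "values": [
        { "key": "user", "value": { "stringValue": "alice" } },
        { "key": "mixed", "value": { "arrayValue": { "values": [
          { "intValue": "42" },
          { "stringValue": "ok" }
        ] } } }
      ]
    }
  }
}
```

The PR under discussion is about allowing values like this as attribute values on other signals, at least at the data-model level.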
A
The next one: Alex Wert's new PR for Elastic Common Schema support. Yeah, please.
B
Just wanted to let you know: the old PR that Cyrille created a while back has been closed, since he's not with Elastic anymore. I created a new one, and in it I also tried to reply to all the comments and questions we had on the old PR. So this is a request for review, and if there's something missing, or any other questions, please feel free to add them on the new PR.
A
Yeah, by the way, about the reviews that you already have on the previous PR: basically, Dan and Riley are not here, they are taking holidays, but they already approved the previous PR. So I would expect that once they come back next month, they could be reviewing this one. Let's ping Yuri, since he's still around.
A
Okay, thank you. Okay, the next one: this is about the messaging semantic conventions update. This is basically a bit of refactoring, trying to provide a more uniform experience and, you know, the proper information, so you can actually get more messaging-specific information out of semantic conventions. It's a big PR, one that has been discussed for a couple of months at least in the messaging group, and finally there's agreement on that front.
A
I am not in that group myself, but as a trace reviewer I reviewed it, and it looks great. So please review it. We have enough approvals so far, but it would be great to get approvals from people who are not so involved, so they can confirm that the explanations are clear enough.
A
Okay, the next one: making exponential histograms stable. Jack, you're on the call; you opened this one just to get the ball rolling. I think it could be pretty sweet to get this one in, so please review it. I don't know if there are any comments on that PR.
C
I don't have anything else to add on that, but yeah, like you said: if folks have opinions, please add them, and you know we'll try to move this forward.
A
Yeah, perfect. There's one item about the max scale parameter that can be a follow-up; it's an improvement, and it shouldn't be a blocker. So please review it. We have four implementations, and there's one in JavaScript in the works now, so I think it's time to do it.
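As background on the max scale parameter: in the exponential histogram data model, scale controls bucket resolution; buckets cover (base^i, base^(i+1)] with base = 2^(2^-scale), so the index for a positive value is ceil(log2(value) * 2^scale) - 1. A minimal sketch of that mapping in Go, assuming the standard data-model definition:

```go
package main

import (
	"fmt"
	"math"
)

// bucketIndex returns the exponential-histogram bucket index for a
// positive value at the given scale: buckets span (base^i, base^(i+1)]
// where base = 2^(2^-scale), so higher scales give finer buckets.
func bucketIndex(value float64, scale int) int {
	return int(math.Ceil(math.Log2(value)*math.Exp2(float64(scale)))) - 1
}

func main() {
	// A max-scale parameter would cap how finely an SDK may subdivide.
	for _, scale := range []int{0, 3} {
		fmt.Printf("scale=%d: index(10)=%d, index(100)=%d\n",
			scale, bucketIndex(10, scale), bucketIndex(100, scale))
	}
}
```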
D
I've been doing some work to organize how we're going to tackle all these semantic conventions to get them stable, because that was identified as a priority in general. Morgan has been doing good work to get community feedback and brainstorm in order to identify our top priorities for 2023, so at kind of a strategic level. There's this document going into the community repo, and I encourage people to have a look at it.
D
But we need a process for going from this kind of high-level roadmap to what it is we're actually working on. I think this is important, because something that really came to light in 2022 is that things tend to move very slowly in the specification process, and it's not because our process is overly complicated so much as that our focus is spread very thin. So we have a lot of open issues and a lot of OTEPs.
D
There are working groups, but we aren't really organized as a spec community to pick some subset of them at any time and try to get them over the finish line. I don't think every single OTEP or issue that comes in has to get prioritized or organized in this manner, but for the stuff that we deem important, we should, in the new year, be trying to figure out how much bandwidth we actually have and consciously choosing a set of projects to be working on at any one moment.
D
I think just that process alone will cause a big change in the feel of people trying to get specs approved or disapproved, either way: if we can tell people explicitly that we're going to be working on this, and then as a group try to get it over the finish line quickly, or drop it if it turns out the champions for that OTEP are no longer responding.
D
It also lets people know: hey, we're explicitly not prioritizing your project right now; we don't have the bandwidth to work on it; we're working on these other things. So at that point people won't necessarily be receiving a bunch of comments from approvers or TC members, but they'll understand why.
D
So it's not confusing. So I'd like to see us kick off something like this in the new year. Especially given the wide list of priorities and projects we already have open, it would be helpful to organize all of that work into a backlog, and we already have a set of tools we've been developing to do this, so we have an understanding of what project management means for spec projects.
D
We have a project board where we can do the actual organizing, and we have a project issue template in order to get things put up onto that project board. I don't think the system's perfect; we're definitely going to want to tweak all of these things as we use them, but we do have a starting point.
D
The main thing that we need is to participate in setting this up as a community. So in particular, I would love feedback from TC members and spec approvers about how feasible it is for them to spend some additional time triaging all of our existing projects, organizing them, and seeing if we can get the first round of more organized work kicked off in January. But I would love feedback on this not just from TC members, but from everybody in the spec community.
D
So that's my pitch. It's the end of the year, and we'll be digging into this harder in January, but we've got at least a couple of TC members on the call right now and a couple of maintainers, so I'm interested to hear your initial thoughts on trying to get this done.
A
Actually, I have one more comment. I think it sounds great; it's a good plan. The only concern that I have, and of course we can see how things go, is head count. You know, I think it's a lot of work, and we need people who are actually in the loop, and part of it is finding experts for some specific parts. So we should really be trying to find people. For example, I don't think we have enough Kafka experience in the group at this moment. So if we wanted to actually go and improve the Kafka-specific semantic conventions, who knows what we have, you know, how good it is, etc.
F
This may actually help with the head count problem. If we have a head count problem now, then not organizing things is just going to make it worse. If we start on this process and we find that we don't have the head count to do all of the projects that we planned in the timeline that Ted came up with, it just means we do fewer projects and it takes a little longer, because we don't have the head count; but it'll at least be a more organized effort.
D
Yeah, that's a really important point about what's been going on to date. I think we have a good process in that people aren't coming in and just rubber-stamping things they don't understand; we want to be cautious about getting things into the specs. So that's a good process. But a side effect of not being organized is that when we don't have the people, what happens is those issues just kind of stall silently. We don't have a mechanism for trying to prioritize them, or for deciding.
D
We don't feel comfortable judging this; we don't have the subject matter expertise. And at least, you know, going back to that project or that OTEP and saying: hey, we need to get more subject matter experts to weigh in on this, and we need somebody from our community to agree to learn the subject enough that they can at least understand the...
D
And then we don't go advertising and trying to organize more experts to come in, either, because we don't realize there's a problem.
D
Yeah, I don't know if any maintainers or TC members or anyone...
H
Great, it also sounds good to me. Sorry, I was making lunch, so I was eating. That's fine, yeah. I think, just to re-emphasize what other people said: if the minimum we do is make sure we have enough people to make decisions in any particular working group and get things through, I think that's ideal, because our bandwidth is limited by how many people can attend a meeting. That's good.
D
Yeah, likewise with our OTEP backlog: there's a lot of good stuff sitting in our OTEP repo as PRs, but a number of them, you know, are tricky enough that we do need to weigh in and think about them.
D
We can't just approve, you know, approve the OTEP without comment. But I think if we can get organized and just pick a subset of them at any given moment, and make it clear to the rest of the community, and the public in general, which ones we're working on, we might even be able to start attaching timelines to these things, right? Like: there's a public review period happening right now for this OTEP for the next couple of weeks, and then we're going to approve it.
D
Unless, you know, someone makes a really big comment, things like that. It's a little hard to have milestones and deadlines when we're not explicitly choosing which things we're going to work on.
A
Let's see how that goes. But basically, one of the things is that probably too many reviews are expected on an OTEP, you know. So we will be trying to play with a couple of changes there, coming up next month. Bogdan was proposing some changes, and he won't be around to finalize the details until January, but hopefully we can make it happen.
D
Yeah, I think that would definitely help streamline the process. It does seem like it's sometimes a little unbalanced how many approvals you need to get something over the finish line. I think we could lower that, because for changes that really are serious or have big implications, I do think we're cautious about hitting the merge button, you know, even if there are only two approvals or whatever. So I'd be fine lowering that limit, but...
D
...paired with also having a triage process where, rather than just communicating in private, we have a way to publicly communicate which of these projects we're working on, and kind of who within our community is assigned to them. I think that was the other piece of feedback.
D
We all kind of know each other well enough that it's easy just to ping somebody on Slack and say, hey, can you look at this, or whatever. But when the people who are doing the work on a proposal don't have a lot of connections in the OpenTelemetry community, they're often very shy about just reaching out to some, like, random TC member or something like that.
D
If they don't get a response in the GitHub PR or issue, they feel very shy about choosing some other mechanism for reaching out to people. So, you know, letting them know: hey, this is the person who's assigned to shepherd this through the process, so if you are confused, or things are stalled out, or there's some kind of issue, or you have questions like this, you can totally DM this person and they will help you with it.
A
Yeah, totally. Okay, any more comments on that front?
C
Yeah, I have a quick comment. So I was scanning through your document really quickly, and there's a section at the bottom that talks about trying to, you know, go through our existing work and figure out what the high-priority items are. One thing that I noticed was that we have some work that is done, but it's not done. Let's talk about metrics for a second: metrics, they're done, but they're not really done.
C
There are still a couple of high-priority items that we punted on; exemplars and synchronous gauges come to mind. What was another one? There are observable histograms; I think folks have talked about that, and I can probably come up with a couple more items. Oh, the hint API, that's another thing that we hear about quite frequently from users. And so, you know, where do we...?
D
Yeah, I mean, there's always the backlog prioritization issue: if something's labeled a P2, that can't mean it just never happens; you have to find the right mix of different kinds of high-priority items. But you're totally right for a lot of these projects.
D
I can see the same thing happening to log events; this happened to HTTP as well, where we got the bulk of the work done, but part of how we did that was by carving off certain things and saying: okay, this piece is kind of blocking the rest of the work; it's important, but it's less resolved than everything else, so we'll punt on it. But we don't really track that we've done that.
D
So it would help to have those remaining metrics items as, you know, projects, or placeholder issues or something, so that they can go on the board and we don't forget that this is a thing we committed to doing.
A
By the way, just a not very important comment, but Morgan's roadmap also asks, you know, what the priorities are next, and it could probably be interesting to raise there, in the case of metrics specifically: hey, there are still items that are super important for us, and we want to make them happen as a priority.
D
That would be good feedback for that doc. I think he's leaving it open over the holidays, but yeah, mentioning "hey, we didn't actually finish metrics" would be good, because I want all of those features that you just mentioned checked off. That's the good stuff.
C
Yeah, so yeah, I like this idea, I like the direction, and I'm happy to participate, so I'm looking forward to it.
D
To get started: I don't know that we want more OpenTelemetry meetings on our calendar, but it also seems like, this meeting aside, we usually use up all the time in the spec meeting, you know, talking about whatever issues are open and trying to discuss details. So I don't want to turn this meeting into just a big triage session all the time.
D
I was proposing that the maintainers meeting on Monday has a lot of the right people and tends to not consume its full time, so maybe we could split some of the triage work between this meeting and that meeting. I don't know how well that works for everybody.
D
We could maybe use Slack a bit more for discussing this stuff, but we tend to like meetings; we tend to like synchronous interfaces in this project, for whatever reason.
D
With that said, when it comes to actually doing the work of discussing what it is we should prioritize and all of that: if people have ideas, it would be great to get comments on that, because I definitely don't want to pack yet another meeting onto our calendar if we can avoid it.
A
Cool, okay, yeah. Thank you so much for that; interesting discussion. Okay, moving forward. So the next item is Allen's batch processor defaults. Please.
G
Hello, folks. Yeah, that's my PR; I think it's a small one. Some folks in the Log SIG, myself included, have been kind of going through some of the dangling TODOs in the log API and SDK specification, and this is one of them. It's in reference to the batching processor defaults for logs: so, you know, the export interval, for example, and that's the one that's actually come under discussion.
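As background: in the trace SDK, the analogous knobs are the batch span processor's schedule delay, queue size, and export batch size (5 s, 2048, and 512 in the spec's defaults). A minimal Go sketch of wiring these up explicitly, with the stdout exporter standing in for a real one:

```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	exporter, err := stdouttrace.New()
	if err != nil {
		panic(err)
	}

	// The processor flushes when the schedule delay elapses, or earlier
	// once a full export batch has accumulated; queue overflow drops.
	bsp := sdktrace.NewBatchSpanProcessor(exporter,
		sdktrace.WithBatchTimeout(5*time.Second), // OTEL_BSP_SCHEDULE_DELAY
		sdktrace.WithMaxQueueSize(2048),          // OTEL_BSP_MAX_QUEUE_SIZE
		sdktrace.WithMaxExportBatchSize(512),     // OTEL_BSP_MAX_EXPORT_BATCH_SIZE
	)

	tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(bsp))
	defer func() { _ = tp.Shutdown(context.Background()) }()
}
```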
G
So there's a feeling here: Tigran has said that he feels like five seconds is too long, and in fact he thinks it's even too long for traces as well. Five seconds is the default that I've proposed in this PR for the interval, because of, you know, consistency with traces. But he raises an interesting question: whether there would be any appetite for considering the five-second default a bug for traces, and going back and considering a shorter interval.
G
You know, if we wanted to keep consistency across signals, right: proposing a shorter interval for both logs and traces. So that's maybe one path that I'm interested in having the wider community weigh in on.
G
Another path would, of course, just be that they differ, and that we opt for a shorter interval for logs. Tigran's presented some arguments based on a live-tailing kind of use case: if people are shipping logs from an SDK and then through a collector, each of these components adds a delay, and each delay kind of destroys a use case like live tailing; a five-second SDK interval followed by collector batching, for example, already stacks up to several seconds end to end.
G
Curious if other folks have strong opinions one way or another, or if you need to think about it. I just wanted to bring this issue to your attention, so you could weigh in.
D
I do agree that five seconds is a very lengthy default, especially when it compounds all the way up the chain, like you said. As far as having different defaults:
D
you know, it's definitely possible, even within the SDK, to tee logs and metrics and traces off to different backends. But thinking about it from the perspective of OTLP, where we're sending all this stuff out over the same protocol: to me it doesn't make sense to have different batching settings for the different signals going over that protocol. It seems like that would turn into something confusing, which would be kind of exacerbated if the defaults differed.
G
Yeah, that's basically my concern as well: just that it might be confusing for users, that they won't know why they have to go in and configure one so they're the same, because they probably wouldn't want them to be different. As a side note on metrics: there isn't really a notion of a processor in metrics, but the default for the periodic metric reader is actually, recalling off the top of my head, 60 seconds.
D
That sure makes installing OpenTelemetry a pain in the ass. That is feedback we get when people are trying to set these things up, which you could argue is a special use case, but it's difficult when you're trying to understand why you're not seeing something, and whether there's a connection issue somewhere in your pipeline, or did you not actually...
D
...are you not actually producing metrics, etc. Having all of these big delays built in makes that difficult, and switching things out to, like, a non-batched processor doesn't leave you with confidence that when you switch back to what you're actually going to use in production, it's going to work. So I would be in favor of maybe approaching this from an OTLP standpoint: if we're going to do efficient batching, by trying to send metrics, traces, and logs all piped out in the same batch.
D
Yeah, that makes sense, and, you know, 60 divides by five well enough. But I guess my point is: maybe revisit it coherently, as long as we've got time. And I'm talking about a reach goal that I don't think we need to attack right now, because we have plenty of work on our plate: we don't have a streaming protocol.
D
We have a batching protocol. I've done work in the past to make streaming protocols, where the data is going out as fast as it comes in, and you have a back-pressure system for slowing that down when the thing you're sending to wants you to slow down.
D
That can be, in some scenarios, depending on what you're doing, a more efficient solution in general, because you aren't alternating between sending no data and sending a big pile of data; everywhere along the pipe you're just trying to push as fast as you can and regulating back pressure on the receiver side.
G
Others might feel that way too. So, maybe separate from this PR, I'm considering opening an issue and proposing that we shorten the default for traces, so folks can think about that.
H
I have a question, which is, maybe this is my naivety with the collector: do we flush when the buffer's full, or do we drop when the buffer's full? Because it looks like, according to what I'm reading, we drop when the buffer's full. In that case, I'd argue five seconds is way too long. But if it's five seconds or buffer-full, whichever comes first, when we flush the batch, then I don't think it matters as much, because I expect logs to fill up very quickly.
H
Basically, if we have a flush trigger, right, plus different headroom beyond that, I feel like that gives you what you really need for logging. Again, the delay matters more when I don't see anything; I don't want to waste a ton of time with a bunch of processing.
H
So, you know, if I get one log every minute, maybe waiting five minutes to send it all is fine, because there's no activity on that server; but when it's hot, I should be flushing quickly when I'm tailing, right? And that's where I would use that other control. So that's the question I'm asking in this whole debate; I didn't know specifically how things work on both sides, because I'd need to go...
H
...look that up. You know, I believe I've configured the batch processor twice in my life and then just copy-pasted the setting forever and never thought about it again. So it would help if somebody understands what that is and can answer. To my mind, all the concerns about five minutes being too late are basically, in my opinion, overwhelmed by this: okay, if I really have an active server, it's going to flush quicker. If that's not true and I'm actually dropping logs...
J
We're not dropping; I mean, the collector's batch processor will not drop logs; it will send the logs if the buffer is full. Also, if it receives more logs, or other signals, than the configured max size, it will just split them: the batch processor will split the received resource logs or whatever, so that it can send batches not bigger than the configured size.
C
Of course. And while we're on the subject of differences between the SDK and the collector: for the batch processor in the collector, the default timeout is 200 milliseconds, versus the default timeout of five seconds for the batch span processor, and an unknown one for the batch log processor.
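As background, a minimal sketch of where those knobs live in a Collector config, with the 200 ms default timeout mentioned above made explicit:

```yaml
processors:
  batch:
    # Flush after this much time even if the batch isn't full
    # (the Collector default is 200ms).
    timeout: 200ms
    # Flush as soon as this many items have accumulated,
    # regardless of the timeout (default 8192).
    send_batch_size: 8192
    # Upper cap on outgoing batch size; 0 means no cap. When set,
    # oversized incoming batches are split rather than dropped.
    send_batch_max_size: 0
```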
D
And also, Josh, to your concerns: these are just the defaults, and the defaults should not be terrible, but I think they should be a balance between something that's quick enough that it makes sense when you're a new user, but that won't ruin you if you roll into production. People absolutely tweak these things to do what you said, like maybe having a really long time.
C
An interesting point about shortening the timeout: the OTLP default settings are indexed towards a setup where your collector is local, so the defaults for OTLP, you know, the endpoint, are localhost:4317 and so on. And I think we've made that decision before: that the defaults should be tuned for that type of setup, where you have a collector running on the same host.
H
One thing that's important, though, is that I think there's a different expectation on sampling between traces and logs. To some extent, well, I guess that should be the job of a sampling processor, not a batch processor. But to some extent I think that if the batch processor in the collector is different than the in-process SDK one, that's actually okay and kind of expected to me. However, I would have expected the in-process one to be faster and the collector to be slower.
H
The assumptions we made around batching and sampling shift by a factor of a thousand, especially when it hits in-process, and, you know, for the logger they are just much harder to deal with. So I'm not surprised that logging's the one that caught it, I guess is what I'm saying. And the second thing is: I think we do need to go fix this and put some effort on it. I don't know if we want to do this as part of the Logging SIG and ask them to go look at the trace things.
A
Yeah, actually, I was going to say that I could nominate Tigran. He's not here; probably he's on holidays. But next year he will probably be able to try this.
A
But shall we create an issue for that, so we don't forget? I mean, there's the issue for this PR that Allen created, but having another one explaining that, you know, we should go and check the values for these across the different signals would be a good one. So we don't forget.
A
Awesome, perfect, thank you so much. And let's see; hopefully we can help Tigran if he's too busy with that one. Okay, moving forward: I only have three more items. We don't have to discuss them; they are just basically PRs that I will probably be merging later today. At least the first one: deprecating the Jaeger exporters. This is just for everybody's information: we are deprecating the Jaeger exporters in the SDKs.
A
They will still be around, but in the second half of next year they will be, you know, removed from the core parts, at least. Yuri's involvement in Jaeger helped us, you know, ponder that, so it makes sense, and it's looking good. The second one is about renaming the built-in exemplar filters; it has enough approvals, but please take a look at it. The last one is a small clarification on which responses from OTLP/HTTP should be retried and which ones shouldn't; it has enough approvals.
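As background on that clarification: OTLP/HTTP distinguishes retryable responses (throttling and transient upstream failures) from everything else, which must not be retried. A minimal sketch in Go of a retry decision along those lines; treat the exact status list as an assumption to verify against the spec text:

```go
package main

import (
	"fmt"
	"net/http"
)

// isRetryable reports whether an OTLP/HTTP export may be retried for
// the given response status. Throttling (429, honoring Retry-After)
// and transient upstream errors are retryable; anything else is not.
func isRetryable(status int) bool {
	switch status {
	case http.StatusTooManyRequests, // 429
		http.StatusBadGateway,         // 502
		http.StatusServiceUnavailable, // 503
		http.StatusGatewayTimeout:     // 504
		return true
	}
	return false
}

func main() {
	fmt.Println(isRetryable(503)) // true: transient
	fmt.Println(isRetryable(400)) // false: bad data won't improve on retry
}
```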
A
So please just take a look at those; later today I will be merging them. I don't think we are in a hurry, because we're not doing any release of the spec anytime soon, but it would be good to get stuff that is already approved and all that merged, you know, instead of just letting it sit there for a couple of weeks.