From YouTube: 2023-01-04 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A: Okay, let's start. We have one item on the agenda: it's about the batching defaults. Alan, I'm guessing you added it.

C: Yep, I just wanted to give this group an update. We discussed this in the spec SIG — I'm forgetting — a week or two ago, but there's some general agreement that five seconds seems like a long time even for traces, and there's some desire to talk about that. I haven't yet, but I'm going to open up an issue to continue that conversation. I just wanted to give you all a heads-up; we'll probably discuss it again in the spec meeting this next week, Tuesday. But anyways.

C: That's the update: five seconds is too long for traces, and we're going to consider lowering that. Most of the things we talked about in the spec SIG are things that we've already discussed here. There are differences between the SDKs and the Collector — for example, the Collector is 200 milliseconds for span data and the SDKs are five seconds.

C: There's also the discussion that the SDK settings for OTLP are typically geared towards exporting to a local collector — localhost:4317, for example — so that would seem to imply that a shorter duration could make sense, just to stay consistent in that regard. Anyways, Jack, you and I have talked about this. Do you have other comments? I'm going to open this issue today, so folks can take their time with their thoughts.
B: Where did we leave this last time, on the subject of whether we should try to have consistency, or whether we're accepting of the two signals diverging? Let's say that we come up with some number for logs, like one or two seconds, as the default. Are we happy moving forward with that and then separately hoping that we can align spans to that default, or do we need to treat unifying both signals on some default as an atomic thing that we do all at once?

A: We don't have to have the exact same numbers, but it would be nice if it was possible, right? So if the broader community, for traces specifically, believes lower numbers are also useful for traces, and we can make that happen — call it a performance bug fix so that it's not a breaking change — then that's great.

A: Let's do that first, and then, if the number is good — if it's sub-second, or a second or whatever, not five seconds at least, something significantly lower — then I think we can use that for logs as well. So maybe let's try it in that particular order: fix the traces part, and if it's good enough for logs, we accept that for logs as well.

A: If that doesn't work, then I think it's fine to diverge slightly and have a lower number, especially since I personally consider it a bug. But I'm hopeful here: if we can make the change for traces, that's the best we can do, and we then use the same number for logs.
C: Yeah, and I think there were two camps of thought in regards to that: should they be the same, or is it okay for them to differ? An argument for them being the same is new users coming in and their install experience making sense — just simplifying their install experience. Also, a shorter interval for everything means that a new user sees data right away.

C: Typically your new users are testing things out, and you don't want them to wonder why their data is taking a while to show up. So there's that, but it should also be balanced with —

C: A nice quote from Ted during the SIG meeting was: it should be good for new users, but the default also shouldn't ruin you if you deploy it to prod, ideally. So that's one camp of thought. The other camp was: well, maybe it's okay for them to differ.

C: So that's the case, and also the Collector: if the batch exceeds some configurable amount, it's going to flush anyway — it's not going to wait for five seconds or whatever.
B: Yeah, the relevance of this setting is — this is how the Java implementation works for spans and logs. Let's say you reach your max batch size parameter, and let's say that's set to 100, so you've accumulated 100 log records before you've reached your timeout setting — your interval setting — which is, let's say, five seconds.

A: Sounds like an implementation bug to me. Well, it might be like that, right?

A: I think it's fine to temporarily exceed the maximum number of items you can hold in memory while they are being exported. That's kind of expected: you accumulate a batch, you start sending the batch, you give it to the exporter, and while it is being sent, sure, it is held in memory, but you should continue accepting new items into the queue. I mean, the classic implementation would be like that. You definitely don't want to drop items while you're sending the previous batch.
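The "classic" behavior A describes — keep accepting new items while a previous batch is in flight, and drop only when a separate, larger queue bound is exceeded — can be sketched with plain Java collections. This is an illustrative sketch, not the actual SDK code; the class name, sizes, and fields are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: a bounded queue keeps accepting items while a
// previously drained batch is being exported; items are dropped only
// when the queue itself is full, never merely because a batch is in flight.
class BatchBuffer {
    static final int MAX_QUEUE_SIZE = 2048;  // drop threshold
    static final int MAX_BATCH_SIZE = 512;   // export threshold

    final BlockingQueue<String> queue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE);
    long dropped = 0;

    // Called by application threads on every finished span/log record.
    void add(String item) {
        if (!queue.offer(item)) {
            dropped++;  // queue full: drop rather than block the app
        }
    }

    // Called by the worker when the batch fills or the schedule delay elapses.
    List<String> drainBatch() {
        List<String> batch = new ArrayList<>(MAX_BATCH_SIZE);
        queue.drainTo(batch, MAX_BATCH_SIZE);
        return batch;  // while this batch is being sent, add() keeps working
    }
}
```

The point of the two constants is exactly the distinction A draws below: the batch size is a flush trigger, the queue size is the drop threshold, and the second must be larger than the first.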
B: And John Watson — they probably have their fingerprints all over that. I think being defensive about memory — there's at least a case for that: just put a max size on your queue, and have that max size correspond to your configured —

A: Max batch size. Okay, I may be confused here. We probably should have, if we don't, two different settings. One is the batch size that you accumulate, and once it's accumulated, you send it — that's one setting. The other is the maximum allowed memory usage. That's a different thing: you hit that number, you start dropping the data. That's fine, that's expected, but it should be much higher than the maximum batch size, and those need to be different. If they are the same, I would argue that's the wrong design.

A: If that's the parameter today, then that's the wrong design, or somehow it was implied that one should be equal to the other, and that's wrong. The size of the batch that you accumulate — that's a valid size; it's not something after which you start dropping. You start dropping when you exceed some other, higher number of items in memory.
C: But maybe, if we are going to spend time as a community looking back at the span defaults for that, we should look at all of them holistically. Do we feel like the span defaults for things like the max queue size and the max export batch size are right, or do we also want to consider changing those?

C: Let me get a link — I'll share it in the Zoom chat here.

A: We have four settings, right? We have the schedule delay, we have the export timeout, we have the max queue size and the max export batch size. I see four environment variables, and we have the same four here as configurable parameters. Those four, yeah — although the default for the timeout is 30 seconds, the delay is five seconds, and the max queue is 2048 with a batch size of 512. Yeah, the same numbers I see in the environment variables. Okay — and obviously the max queue size is higher than the batch size, as expected, and that's fine.
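For reference, the four settings and defaults quoted here correspond to the spec-defined batch span processor environment variables. A sketch, using the defaults as they stood at the time of this meeting:

```shell
# Batch span processor defaults, expressed as environment variables.
export OTEL_BSP_SCHEDULE_DELAY=5000          # ms between exports (the "delay" under discussion)
export OTEL_BSP_EXPORT_TIMEOUT=30000         # ms an export may run before it is cancelled
export OTEL_BSP_MAX_QUEUE_SIZE=2048          # items held before dropping begins
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=512    # items per exported batch
```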
A: So here: how long the export can run before it's canceled — that's a different setting — and the schedule delay is the interval. Cool, yeah. This is the one that we're talking about, right? Yep, right, yeah.

A: It's also not quite clear, right? It's not the interval, it's a timeout: if you don't hit the max export batch size within five seconds, you export whatever you accumulated. That's the logic, right? That should be the logic; that's what I would expect to happen.

C: I think there was a PR — I haven't looked at the wording, but I saw a spec PR fly by that I think was just a clarification of these settings, and maybe it improves this wording.
B: But Alan, I think what you're suggesting is that if you change one of these settings, potentially other settings may need to change at the same time. Like, do the default max queue size and max export batch size make sense with a lower schedule delay in milliseconds?

B: And what I'm thinking about in this conversation is, one: is it a breaking change to change these settings, and how would we deal with that? There's the question of whether or not it's a breaking change at the spec level, and then whether or not it's a breaking change for a particular language SDK. For example, in the Java SDK we could change the defaults for the environment variables pretty easily.

B: We might not be able to change the defaults for the programmatically configured batch span processor. Why?

B: The code that interprets the environment variables is still experimental, and so we can make a behavioral change to that without thinking twice about it. But the defaults for the programmatically configured batch span processor — that's stable, and while we don't want to be making any breaking changes to the APIs, it would be a —
A: — change that, in theory, I guess you could call a breaking change, since you can observe the difference. But practically, what's going to happen? Does anybody depend on the actual default value of the interval, or whichever setting we change? Does any recipient really depend on this being five seconds by default, such that if you lower it, someone is going to break? I don't think so. What changes is the performance characteristics of the system as a whole, so maybe whoever is receiving —

A: We have such a high latency that that itself, arguably, is a bug, so we're fixing that bug.
B: Yeah, and I'd have to go and think about this a bit, but I think we have precedent for changing the default values of various bits of configuration in our SDK and not considering that a breaking change. So I think I agree with you; I think everyone in the Java SIG would come to the same conclusion.

A: To me, an obvious breaking change is something you change and the code doesn't compile. Or you change something and the behavior functionally changes — you can really see the difference, the data has a different shape, or whatever. This? I don't think so. I think it would be a stretch to call a change in the interval between exports a breaking change. No, I don't think so, personally.
B: So let's assume for a second that we can make the changes to the trace SDK and that folks are happy enough, at least in the language SDKs.

B: We can change it — and let's assume that we can change it in the specification as well. We can use phrasing that says the default should be one second, or whatever value we come up with, unless there's a good reason for it to be otherwise, like backwards compatibility. So we give SDKs the ability to choose a different default value. We've done that before when we've changed default values in the specification, and we could presumably do that again.

B: So, assuming that we can make the change, the next question is: how do we come to an agreement about a default value that seems more intentional and less arbitrary than this five seconds? Basically, what data supports it? What data do we use to support a particular value, or do we just guess?
A: Yeah, the way I would argue about this number is: what number results in an acceptable end-to-end latency for the cases where it matters, particularly when live tailing is involved? This is the one about which I can argue more or less objectively: humans expect that whatever happens is observable within, let's say, a second or so. If it's longer than that, it starts getting annoying and impacts human perception.

A: So we should aim for an end-to-end latency of about a second or less, something like that. I don't know how to argue about it from the throughput perspective, which is the other thing this impacts — the number of requests per second, or whatever it is. I don't know what principles to base that discussion on. So latency is the only argument that I can more or less talk about objectively.
C: It would help to have a little bit more context around that argument. Are we assuming a language SDK that is reporting through a local collector? Because both of those things are going to add a delay.

A: The collector will do additional batching on top of that, so I don't really see how reducing the number from five seconds to one second is going to end up causing a catastrophe for some back end because we're suddenly hitting it with a million requests per second. I just don't see that happening. Yeah.
B: And I think we have something — an argument we can lean on that can allow us to take it down to about 200 milliseconds, probably, which is the default configuration for the batch processor. The default timeout for the batch processor is 200 milliseconds, and the line of reasoning is: we say, hey, the defaults for the language SDKs are set up for a scenario where you're running the collector and you're exporting to a collector on localhost.

B: That's why we chose the endpoint, and that's why we chose to disable compression by default. And the collector advises you to use the batch processor before you export to your ultimate destination, and the batch processor's default timeout is 200 milliseconds. So the back end should already be capable of handling the noisiness of collectors that export every 200 milliseconds, and reducing the SDKs from five seconds to something — I don't know, between 200 milliseconds and one second — shouldn't be a problem from a noisiness perspective.
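The 200 ms figure referenced here is the collector batch processor's default timeout. A minimal pipeline making those defaults explicit might look like the following sketch (the receiver and exporter names are illustrative):

```yaml
# Collector batch processor at its documented defaults.
processors:
  batch:
    timeout: 200ms          # flush at least this often
    send_batch_size: 8192   # flush early once this many items accumulate
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```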
C: This brings up another good point. I think Josh Suereth brought this up during the SIG meeting: he said it doesn't surprise him that the collector and the SDKs are different in their defaults, but that intuitively he would have expected the SDK's to be shorter than the —

B: Or we can just align them both to 200 milliseconds and say the difference between the collector and the SDK is: they both have the same interval, but the difference is their default batch size — the SDKs at 512, the Collector at 8192 — and that reflects that a collector may be expected to collect data from multiple SDKs.
A: Yes, it does. We expect the collector, at least in some cases, to handle a higher volume of data, especially when it's an intermediary that receives data from multiple hosts or whatever. So that's kind of expected. And in this case I would also ask the opposite question: why five seconds? What's the justification for that? How was that calculated? I don't think there is an answer to that at all.

A: Okay, sounds good. Thanks, Alan. Hey, we used half the time, so maybe let's move to the second item. Santoshi, are you here? Yes, you're here? Okay, what —
E: Sorry, so this is a clarification on the AnyValue. I just want to understand how protobuf works.

E: So I gave two examples of log records here, each having event data and a bunch of key-value pairs. In option one, the key-value pairs don't have a type — they're all implicit types — whereas in option two I have explicitly mentioned the type for each value, and I think this is the current representation in OpenTelemetry, because of the way we have defined the value. Yes.

E: The question is: if we want to have a free form without specifying the types, what is the way to go about it? For example, in the case of a log record's body, it says AnyValue, right? So does it mean that it is going to be option two — that you can't have free-form JSON there?
E: Okay, so for this PR — for adding support for nested attributes, or support for maps as attribute values — if we want to support a free form, is that something nobody else wants? Is it just the one request, or are there more folks asking for it?
A: It's a valid question, but it's a different question, because what this particular pull request is about at the moment is the abstract notion of "any" being allowed as a type for the attributes in the API. That's what it is about. It doesn't say how you then convert that any value to a wire format — that's a related but separate discussion. The definition of the wire format is in the realm of the OTLP specification.
E: But even in the case of protobuf, I believe today, with every value you also indicate the type, right?

A: Exactly, it is exactly like that, and that's how the oneof definition in protobuf works — there is no other way. And this particular JSON structure is the protobuf specification of how oneof fields should be represented in JSON format. It's not our definition — we didn't do that. It's the standard definition that comes from the protobuf libraries.
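As an illustration of the two options being compared: a free-form body like `{"name": "checkout", "quantity": 2}` comes out, under the protobuf-derived OTLP JSON mapping of `AnyValue`, with the oneof field name carrying each type explicitly. A sketch of that mapping (note that int64 values are rendered as strings per the proto3 JSON rules):

```json
{
  "kvlistValue": {
    "values": [
      { "key": "name",     "value": { "stringValue": "checkout" } },
      { "key": "quantity", "value": { "intValue": "2" } }
    ]
  }
}
```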
E: Okay, so I'm just curious: in protobuf, what was the reason to pass on the type? Can the type not be automatically detected?

A: I guess the reason they did this is because you can have more choices in your oneof definition in protobuf than it is possible to figure out by just looking at the — sorry, I'm taking this back; maybe that's not it. I don't know the answer to that — why they did it the way they did it. But it is what it is. Okay.
E: Okay, yeah, I'll discuss with Nev what he was expecting.

E: Yeah, even in the case of the body I —

A: — thought the body and the attributes — yeah, I agree with you: the way that oneof data is represented in JSON is not how you would do it yourself if you wanted to come up with some sort of data structure in JSON. But that would be a different format, one that I wouldn't then call protobuf-derived at all. That would be very different.
E: Yeah, because if we ever get to work more closely on CloudEvents — a cloud event is a wrapper of an existing object, so they wrap it, and the original event goes as-is in the body. In which case it will have to be transformed into OTLP's AnyValue format, yeah.

A: Yeah, but that mapping is well defined. The good thing is that it is well defined: you can do it unambiguously. The definition is very precise, so given an arbitrary JSON, you can map it to an OTLP AnyValue unambiguously, and it should work without any problems — and in the opposite direction as well.

A: Given that it is not using data types that do not exist in JSON, like bytes or some others — I think we have a few that JSON doesn't support as a concept — otherwise they are equivalent: you can represent the same data both ways.

A: It's a bit weird, I agree with you. The way that OTLP JSON represents this is a bit less efficient, unnecessarily verbose — slightly more verbose — but that's the standard definition. Okay.
E: Okay, yeah, I will get back if Nev thinks we don't want to go this way, which is unlikely.

B: I think that that's a much bigger hill to climb, by the way — this has already become pretty accepted.

B: While we're on the subject of these complex attributes: I was curious if you see any sort of path to a resolution on this. Folks have tried to address it from a number of different angles, and it doesn't seem to be making a lot of progress — maybe it's stagnated altogether.
A: Yeah, it has stagnated. I don't think there are either enough supporters or enough people who are strongly against it, so it's hard to tell. Maybe what is necessary here is actually to liven up the discussion a bit — make sure that people actually express their opinions, because I don't see a whole lot of participants in that thread. We had a lot more in the past, so maybe we need — I mean, I need — to circulate the PR a bit more, and let's see. I mean, well, you're right.

B: Yeah, and I'm curious if at some point we can use the TC to make a decision on it. When do we get to that point?
A: One more thing on the agenda — Jack, I wanted to ask you about one thing: I want to try the current logging implementation in Java.

A: What's a good way to do that? I created a blank Java project, I imported the regular OpenTelemetry stuff, and producing a span works. I'm not sure what the next step is for me to do the logging part, because I don't see any documentation. Do you have any pointers for me to try this thing out, to play a bit with it?
B: Yeah, so there's the log API and the log SDK, which we're all familiar with, and, as we're also familiar, the API is not meant to be used by end users.

B: It's meant to be used by appenders, to bridge other logging frameworks into OpenTelemetry. So we have these appenders that are published: there's one for Log4j and there's one for Logback. Basically, if you use those log frameworks in your project, you can configure them to use these appenders and bridge them over into the OpenTelemetry API and SDK. And — sorry, I'll stop there for a second.
A: Are the appenders also in the same repository, or are they in the instrumentation repo?

B: It lives here, and there's some documentation here on how you configure Log4j, then configure the SDK and wire it up so that the SDK can be accessed by the appender. So this is probably the key link.
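For reference, wiring the appender up typically amounts to declaring it in `log4j2.xml`. A sketch — the `OpenTelemetry` plugin element comes from the instrumentation artifact and may differ between versions, and the SDK still has to be installed into the appender at application startup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Bridges Log4j events into the OpenTelemetry log SDK. -->
    <OpenTelemetry name="OpenTelemetryAppender"/>
    <Console name="Console" target="SYSTEM_OUT"/>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="OpenTelemetryAppender"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```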
A: Okay, cool — correct, okay, I'll try that. Thanks, this is enough for me to get started. I may need your help a bit more when I'm trying to use this, but this is good enough for me to get started. Thank you. Yeah.

B: Just let me know. New Relic maintains some examples, and there's some working code that does this exact type of setup that you can clone and reference, if you want.
A: Great, thank you. I want to play a bit with what we have so far, to get a better sense of where we are. Okay, thank you. Someone was saying something and I interrupted.
D: ...the college try and pivot to emitting OpenTelemetry logs. I guess the question that I'm asking is — what I chose to implement from the beginning was to use standard SLF4J, Log4j and Spring Boot, and then write a layout that conformed to the OpenTelemetry log data specification, and I think I have that. I guess the question that I have is: should I not do that, and instead — instead of using Log4j — use the appender that does it all for me, or for us, instead of maintaining our own, basically parallel, layout that conforms to the OpenTelemetry log standard?
B
So
you
you,
basically
you
write
your
logs
to
the
council
and
then
you
use
something
like
fluent
fit
to
forward
those
logs
onto
some
destination.
D
Well,
yes,
that
I
mean
that
the
intent
is
that
would
be
yes,
but
I.
Think
before
we
get
to
fluent
bit.
What
we'd
like
to
do
is:
have
it
already
be
in
a
form
that
makes
sense
from
an
open,
telemetries
perspective?
So,
instead
of
using,
like
you
know,
like
a
just
standard,
a
log
for
J
flat
file,
a
Pender
use,
Json
appender
and
have
that
append
or
layout
conform
to
what
y'all
have
already
you
know
specified
in
the
open,
Telemetry
log
data
log
data
specification
if
I'm
saying
that
right.
B
I
think
I
think
both
approaches
are
are
valid
and
you
know,
speaking
from
a
vendor,
like
we've
seen,
customers
do
both
we've
seen,
customers
that
you
know
they're
happy
enough
having
their
logs
exported
out
of
their
application
via
otlp,
and
so
in
that
situation
you
configure
a
log
SDK.
You
configure
log4j
to
bridge
the
the
logs
into
the
log
SDK,
and
then
you
configure
that
log
SDK
to
export
to
some
some
some
Source
over
otlp.
B
That's
fine
that
works
that
puts
a
little
bit
of
extra
there's
a
little
bit
of
extra
load
on
the
application
because
it
has
to
you
know,
accumulate
these
logs
in
memory
and
then
export
them
to
some
Network
location.
B
The
approach
that
you're
doing
is
valid
too,
so
you
know
just
printing
them
out
to
the
council
in
some
format
and
then
having
some
separate
process,
take
those
from
the
the
council
and
forward
them
onto
some
location.
That's
I,
I,
that's
just
as
valid
and
we've
seen.
Customers
do
that
and
I
I
think
that
that
will
be
the
route
that
that
more
performant
performance-centric
applications
will
choose
to
take,
there's
more
moving
Parts,
but
it's
less
load
on
the
application
and
there's
there's
Pro,
there's
pros
and
cons
to
each.
B
You
know
if
you,
if
you
export
out
of
the
application
in
otlp,
then
you
know
you
can
send
those
on
to
the
collector
and
the
logs
will
already
be
in
a
very
structured
format.
That
is,
you
know,
allows
The
Collector
to
operate
and
process
them
in
an
easy
way.
If,
instead,
you
get
those
logs
to
the
collector
in
like
just
a
raw
Json
format,
then
you
have
to
do
more
things
in
the
collector
to
decompose
the
Json
and
extract
out
the
bits
of
of
the
fields
that
you
want
to
that.
B
You
want
to
operate
on
and
there's
tools
to
do
that,
like
like
the
the
logs
transform
processor
in
The
Collector
can
can
do
those
types
of
things.
It's
just
you
know,
there's
there's
a
little
bit
more
required.
A: Then have that collected by the OpenTelemetry Collector and then sent to the backend. Or you can go directly from the application to the collector, and to do that you use the appender that I was just asking Jack about — the one in the instrumentation library. So essentially you just use your Log4j the way that you normally would, with the appender provided by OpenTelemetry; you configure it once and point it to the collector, and then logs go over the network — likely the local network, because the collector likely runs on the same host.

A: That's one approach, and the other is you send the logs to the console or to a file and the collector just collects those — but in that case it parses them as well, right? You need the parsing part on the collector.
D: Thank you very much. I think the other issue that we have — and one of the reasons why we were proposing to just write them out to System.out and allow something like Fluent Bit to harvest them — is that we're not really necessarily in control of where they're deployed, because it's going to be code that we ship to customers and they're going to deploy it somewhere. So we don't know whether that's going to be GKE, OCP —

D: We don't know. So our goal is to generate the logs as close to OpenTelemetry as we can, and then allow customers to use whatever toolchain they may already have in place. I don't know if that's a pipe dream, but that's kind of where we're going, or where we think we'd like to go. So anyway, thank you.

D: I apologize for using up too much air time there, especially as a new lurker, so I appreciate the consideration and the feedback and the information. Thanks.
B: You can technically do that with the log appender approach as well. There's tooling that the Java instrumentation publishes that will take your trace context — your active span ID and trace ID — and make that available in Log4j's MDC (mapped diagnostic context), and then you can reference that span ID and trace ID in the template that you print out to the console.
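A sketch of what that looks like in a Log4j pattern layout — the MDC keys `trace_id` and `span_id` used here are the ones the Java instrumentation conventionally populates, but the exact key names should be verified against your version:

```xml
<Console name="Console" target="SYSTEM_OUT">
  <!-- %X{...} pulls values out of the mapped diagnostic context. -->
  <PatternLayout
      pattern="%d{ISO8601} %-5level [%t] %logger trace_id=%X{trace_id} span_id=%X{span_id} - %msg%n"/>
</Console>
```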
D: And that's exactly what we did in our starter kit: we're using MDC, stamping it, and using the servlet filters to be able to set those correctly. So, thank you.