From YouTube: 2021-11-10 meeting
D
Clear — you know the guy. Actually, I know the guy a little bit, because we spent some time with Colin Fletcher back when, like, 2014, 2015, when he was kicking around trying to see whether he could carve out something for himself — you know, for a Magic Quadrant for, like, logs, basically, that never existed. And then at one point he came up with "AIOps," and they meant Algorithmic IT Operations, which I was like...
B
All right, let's go over the agenda — we have lots of items. I think we should probably start with the one that was the last one on the previous SIG meeting and which we didn't discuss, I guess, so that we don't miss it again — which was the timestamps, right?
B
Maybe you want to summarize what we discussed in the GitHub issue as well, I think.
A
Yeah, so we had a longer discussion and, actually, I think, a quite interesting suggestion by Tigran on how to handle that. But it all goes back to the fact that right now we have a single timestamp in the case of logs, and this works just fine when we are dealing with, let's say, first-party logs or something similar.
A
But this timestamp might be incorrectly parsed. Like, when you look at today's date — it's November 10th, which in my part of the world will be written down as 10-11-21, but in some other parts of the world it could be written down as 11-10-21 — and if the incorrect parser is being used here and the log has this country-specific format, then the log can have incorrect information and the back end might not be aware of it.
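The ambiguity described above is easy to see directly: the same date string yields two different dates depending on which locale-specific layout the parser assumes. A minimal illustration, not tied to any particular log parser:

```python
from datetime import datetime

raw = "11-10-21"  # the date fragment of a log line, actual format unknown

# The same string parsed with two locale-specific layouts:
us_style = datetime.strptime(raw, "%m-%d-%y")  # month first -> 2021-11-10
eu_style = datetime.strptime(raw, "%d-%m-%y")  # day first   -> 2021-10-11

# Picking the wrong layout silently shifts the event by almost a month,
# and the back end has no way to tell from the record alone.
```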
A
The second is the receipt timestamp, which would greatly help with understanding what happened to this log — whether there was incorrect timestamp handling, or indeed this log was just recently retrieved even if it was generated a month ago. And we've been having a discussion about several use cases here for log timestamps, what could be done, etc.
A
One of the arguments was that we don't have receipt timestamps for other signals — for metrics and for spans — and maybe we should, if that would be the case, but that would add some complexity. And I think that eventually the suggestion from Tigran was that maybe we could instead have the timestamp always assigned to when the record was observed for the first time. In the case of first-party logs, this will be when the record was actually generated; in the case of logs parsed by the filelog receiver,
A
it will be when the log was observed for the first time — and then, additionally, add the parsed timestamp field, where we could explicitly put the parsed content. So it would change this. But this was my understanding, so...
B
Yeah, let me clarify why I was suggesting that, right. I guess I'm convinced that there is possibly more than one timestamp that we want to record — that's clear. The question is where we want to record those timestamps and how we indicate which one is which, right. And your original proposal was that we have this—
B
But then the definition of the timestamp field itself becomes a bit unclear to me, right. So I was trying to understand: how do you concisely describe it? You have two fields, but what the other one is is not clear, right. So I was trying to come up with this definition, and that is where what was then suggested came from.
B
I think then you said "the first time that OpenTelemetry interacted with the log," right. So maybe we rephrase that as the first time that the event was observed by OpenTelemetry, which means that either it was, let's say, the result of calling the logging API in OpenTelemetry, which generated the event — the log record — or the first time when OpenTelemetry somehow observed or collected it, from a log file or from any other source, from a network. So that is kind of a more precise definition,
B
in my opinion, right: the first time we observed that particular event. That's a slight redefinition of the current timestamp field. It's not a significant deviation — we don't really actually say what the timestamp field is today in the data model. So maybe we make that slight clarification there.
B
They either have this original source timestamp that they got from somewhere, or they have the now, the current moment, right — they can put that there, which is, yes, the observed time. So there is a consequence of that, and I don't know how desirable it is. And then the second field in this case becomes the parsed timestamp, right, because it seems like that's the only use case we have for this second timestamp: the parsed — or maybe extracted, which probably means the same thing, somehow, right.
B
Hopefully there is no other use case, which would otherwise make it necessary to either generalize this field or introduce a third timestamp, which probably we don't want to do. I don't know what others think about this. Yeah. So I think I—
F
I generally agree with the overall set of use cases we're trying to support with those two timestamps. I don't actually like the proposal to use the parsed timestamp to sort of invert the meaning of which one's parsed and whatnot, because basically I think of the first — the primary — timestamp, what we call just "timestamp":
F
this is supposed to represent, to the best of our knowledge, the time at which that log or that event was generated, right. And so, if we're processing a log file that was created, you know, any substantial amount of time ago, then we're basically representing it as now, because we may capture it as a parsed timestamp. But I think most backends are just going to be looking at timestamps, and they're going to be thinking of it as when it was consumed, and so it kind of muddies the meaning of that timestamp.
F
To me, the way to describe that timestamp should be: the moment that the event or log was generated, basically. And, to the earlier point, we can get that wrong if we do things incorrectly, and that's where having that secondary timestamp — the receipt timestamp — would be nice, because then we can reason about that.
F
You can, you know, compare and see if there's a big delta; we can do various things. But we can't necessarily assume that things are happening in real time, always, when it comes to consuming — especially log files.
B
That's a good point. So you're saying the timestamp is when the event happened, to the best of our knowledge — as best as we know — and it may be wrong, because we parsed it and the parser may be incorrect, but that's the best we could do. We also record this additional information, which may be useful for you to detect that something wrong happened during the population of this first timestamp field. That's what you're saying, correct?
F
And we may not even be able to provide a timestamp — the primary timestamp — if, for example, in a log file there's no timestamp to parse, right. So this would leave that one remaining optional, even though it is the primary. It's a little counterintuitive in that sense, but a back end can reason about that pretty easily by saying: okay, if there's nothing there, then look at the receipt timestamp, and that's really the best thing we have at that point.
G
Sorry, I lost the very first sentence that you said. Certainly, one of them is the time that we observed it, and then one of them is the time that we got from the origin, wherever that came from, and so we know which of those is which. You know, in some circumstances one of them is going to be suspect; in some circumstances the other one is going to be suspect — like, we have the same problem.
G
If we've got a big problem with queueing, right — if we've got a long queue — then the origin timestamp is going to be a lot more reliable than the first-received.
B
Right — although, I guess, as for the actual names of those things, I wouldn't call them primary or secondary in any way. I agree, but again — I think you're saying the first one is the observed one, which I believe is the same as I was suggesting. Or is it different? Like, when you say "observed," is the definition different from what I was suggesting, or what?
G
We're entirely — I'm entirely agreeing with you on that: the observed one. And then the other one — I'm just trying to say explicitly that where this comes from is the origin. We don't expect that it comes from OpenTelemetry at all. If we can get it from the origin, it's there; if we can't, it's not. Okay, so basically—
D
Yeah, so I think I was going to say something very similar to what David said, which is just basically, you know: have a menu of timestamp fields that encode their semantics, then make them all optional and put what you can, right — just make it explicit, basically — and then leave the interpretation of that to the receiver.
D
But the receiver knows what is meant when it looks at a receipt timestamp versus a message timestamp — that happens to be our terminology, but, you know, there could be a collector timestamp. We just have to define it, right, and not try to map something that is otherwise hard to explain — how it ended up there — on an event-by-event basis.
B
Then I think this somehow answers what you were asking about: there is no single field which is "the best that we know," but the combination is very clear. If the second one — the origin timestamp — is populated, that's the best we know; if not, then the first one, right — the observed one. So it's also very deterministic. It's not like you don't know on the back end what to do: you look at the second field, and if it's there, it's good.
G
And Christian did toss out a third date here, in terms of the collector timestamp. So, you know, one way we could look at it as well is that the only one that we know we can actually always have is the first-observed, and then we could have, you know, an array of tuples of "this is the type, this is the other type" — so this is the timestamp, this is the time it was received by the collector, this is the time it was received by the back end.
A
Yeah, because this is a very interesting subject, because we can also think about it beyond log collection — for example, with RUM use cases, or use cases that involve getting data from mobile devices. There might be some funny time-synchronization issues, and I think this was discussed several months ago, very briefly, during one of the specification SIGs: that there is some space for the protocol to provide some time-synchronization capabilities.
A
But I think that all the problems related to inaccuracies between the different agents that are involved in assigning those timestamps are a separate problem.
B
Yeah, I think that's — yeah, that's definitely separate. If you don't assume that the clocks are accurate, then you have a whole set of other problems to deal with. But even with accurate clocks it's very easy to have, let's say, late collection of the data, when the collection time is very different from the time when it was generated, right. So I guess it's fairly common for the collection to lag a bit, right — maybe a bit, but maybe a bit more — even with accurate clocks.
B
Okay, so yeah, I agree there is room to maybe record the other timestamps. It's just — how much more valuable is that, right? And there is a cost associated with that, right: you record this additional data, and there is the cost of recording, of transmitting, and of processing.
B
What do you do on the back end with all this data? Do we know that people will actually use this information, or is this just a case of "oh, let's put it there — maybe somebody, who knows, maybe somebody needs it, but maybe not"?
D
So, on our side, there's a huge difference between what we call receipt time — which is what gets assigned when we actually see the message in the back end — and what we parse out. And the parsers — of course ours are better than anybody's, but it's also very easy to get that wrong, and all hell ensues if that happens downstream, you know, both from a backend perspective, in terms of data layout, as well as people not finding their data. And so...
D
Those two are very crucial, right. From my experience, I can say that very safely.
D
I've always wanted to actually add a collector timestamp as well, to maybe have a chance to detect some amount of clock drift and maybe, you know, alert customers about that. And when you think that all the way through, then you end up — because there's often routing and, you know, aggregation happening — you might want that to even be a chain, like an array of some sort, right. And it gets more and more complicated.
B
Yeah. So if we have the collection time and the origin time in OTLP, then you can additionally assign the receipt time — the back-end receipt time — and the back-end parsing time; does that give you what you need, right? You now have four timestamps to deal with. Do you need more than that, or are you saying that is good enough?
D
I think, more than anything else, what I'm saying is that I would have to think through what the total list is, frankly, and then maybe also, you know, ping the organization. But what I'm basically saying is that it needs to be clear what's in the field, and it's quite normal that you have multiple of them. I'm not sure I could spec out a perfect list,
D
you know, on the spot. But, you know — yeah, just knowing what you're getting is very important, I feel, because different backends might have different preferences for how they interpret it, right. And trying to, you know, reason through a third-party kind of mapping of priorities — even if that's fake — is probably not up to me.
G
It seems like we've probably got a situation that's "one, two, or many" — and it's probably not one, but probably two or many — rather than trying to figure out an exact enumeration of what timestamps we would use. That's probably going to be different based on each system.
B
We could certainly record all of this as attributes, right. If it's an attribute, we obviously need a semantic convention for that. But the question is: how often do these attributes need to be present, right? If they are always there, then a top-level field is very useful. And it seems like everything that is collected by the OpenTelemetry Collector, which is not received directly from an OpenTelemetry SDK, is going to have both of the timestamps.
D
You know, you could argue that if you have concentrators — sort of chains — somebody might find it interesting to basically get a timestamp at every hop, right. You know, like, you deploy whatever on every node in Kubernetes, but then you shoot everything up to another, you know, OTel Collector that maybe pulls in the Kubernetes metadata — all this stuff; I think you guys are familiar with that, right. So now you already have two hops, right, and there can be queuing in between.
D
There can be all kinds of things happening, you know, in which case having each hop's timestamp can potentially allow some sort of warning around it — which would argue for the collector time field to potentially be looked at as a list of timestamps. You know, just putting it out there.
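As a sketch of what such a chain of per-hop timestamps could enable — a hypothetical shape, nothing like this exists in the data model today — a consumer could flag unusually long hop-to-hop lag:

```python
from typing import List, Tuple

def hop_lag_warnings(hops: List[Tuple[str, int]],
                     max_lag_ns: int) -> List[str]:
    """Given (hop_name, timestamp_ns) pairs in pipeline order, report every
    hop-to-hop delta that exceeds the allowed lag."""
    warnings = []
    for (prev_name, prev_ts), (name, ts) in zip(hops, hops[1:]):
        if ts - prev_ts > max_lag_ns:
            warnings.append(f"{prev_name} -> {name}: {(ts - prev_ts) / 1e9:.1f}s lag")
    return warnings

# Example: a record that sat in a queue between the node agent and the gateway.
hops = [("sdk", 0), ("node_agent", 2_000_000_000), ("gateway", 60_000_000_000)]
```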
B
If a record is relayed late by an intermediary, is that the problem of that intermediary, not the problem of the log record, right? That's probably part of the observability of the collector, yeah, as we understand it. That was something that we discussed in the past for the collector itself: the collector probably should be emitting metrics that describe the latency of the data as it propagates through the collector. So if we had that, then probably there wouldn't be a necessity to timestamp every single log record passing through the collector.
D
Is
if
there
is
a
chain
going
on
which
timestamp
wins
like
if
a
collector
gets,
if
a
collector
gets
a
record
from
one
from
an
upstream
that
already
has
collector
timestamp
set,
really
then
collaborate
with
its
own
or
will
it
leave
the
existing
one
in
place?.
B
Okay, so I guess what I'm hearing is that we think it's good to have two timestamps, it seems, right — one of them being the first time observed, the second being the source, or origin, timestamp. And the origin we will attempt to populate based on parsing, or on whatever we receive from some network protocol — it will likely be there already, maybe, right — and the first-observed will be stamped by the collector itself, when the collector sees it.
D
I think that's sensible. I'm just not sure if "origin" is necessarily the best term. Consider maybe "message" — I'm not sure that's the best one either, but it happens to be the one that we use. You know, "event time" — I mean, "event timestamp" — could be an option, you know. "Origin," in my mind, has a topological meaning, right, which brings us back to collector chains and so forth, so that's confusing in my head. That might just be me, but — just wanted to say that.
G
I was just going to say that I would think that "source timestamp" is much better than "origin" — or, you know, anyway, whatever better term comes along — it would be something that could be set at some point along the pathway.
G
So if your collector is actually parsing as you go through, something like that — if it's reading it from the logs — it could be; but there are several places where, if it's not set, it can be set later. And I would think that if you had something emitting that knew both the source and was the first thing — like the SDK, you know, it's coming from Log4j directly through an appender — then it would set both.
F
That's
that's.
The
point
I
was
going
to
make
too
is:
I
think
we
should
think
of
the
logging
libraries
right,
like
that's
the
case,
they're
going
to
set
the
origin
timestamp
or
the
message
timestamp,
because
they're
originating
it
themselves
and
in
some
sense,
if
they're,
a
first
party,
you
know
code
base,
then
really
we
all
it's
also
the
first
interaction
with
open,
telemetry,
the
first
observation
of
it
so
that
you
would
set
both.
H
If it exists and we can properly parse it — and it would have an observed time for when we parsed that line of that file. But it will always have when it first saw it, and it would be the SDK, or whatever is originally speaking OTLP over the wire, that would be the provenance of that field — that particular field.
G
But I think the point — as far as, if you had an adapter that was reading off of Kafka, for example: there's some lag between when Kafka received it and when we received it, so it wouldn't necessarily be distinguishable.
B
So
what
do
we
do
with
other
structured
sources
like
we
read?
Let's
say
we
use
wind
load
receiver
in
the
collector
windows
logs.
Have
they
have
a
time
stamp,
which
is
the
time
when
the
event
happened
there,
and
there
is
the
time
when
we
are
actually
reading
it,
which
is
different,
which
is
later
right.
So
we
we
populate
both
right.
We
populate
the
wind
logs
original
timestamp
in
the
source
and
the
the
now
we
populate
the
now
in
the
observed
right.
Do
we
call
it
observed
right?
B
Okay, so yeah — how about we maybe give ourselves a bit of time to think about this offline, because there are several other items that we need to discuss. Is that okay, if we pause this for now?
B
Okay, yeah. So please, please continue commenting on the issue there if you have additional thoughts — this probably will take a bit more time. Let's move on to the next item there, then. I think you populated the document — do you want to go over the items? Sure.
F
So there are currently 16 spec issues for logs, and seven of them have that label on them. You know, of course you can question which ones should have the label, but we have seven, and I wanted to focus on these. Many of these are, I think, fairly straightforward — I just want to make sure we're on the same page. But the first one: the ability to name a logger.
B
There is — there's a PR, right? Yes, 36 — yes, there's a PR, and Bogdan objected to this change. Okay, so his suggestion was that there is this instrumentation library name, which is supposedly where the logger name should go, and I think I do not quite agree with that. That's where we are right now, I believe — where we paused this effort.
B
The thing is that a logger is not necessarily a library. It has a different granularity, right: you can request a logger per class, per module, per package, per whatever, right — it's up to you. It's not an instrumentation library; it's a different concept. A library may have a logger, but it may have multiple loggers. So what do you do in that case, right? And I think that creates this mismatch between what the expectation is for what the instrumentation library name field should carry and what we put there.
B
If that's what we decide to do — I don't know if there are other opinions — maybe we say that the instrumentation library name is actually, yeah, kind of a catch-all thing: you put there whatever is the name of the thing that you are talking about, right — whatever is the unit of source code that is associated with the batch of log records that are under it. We have this hierarchy of instrumentation,
B
and under that we have a bunch of log records, right. But that requires maybe slightly redefining what an instrumentation library is: it's no longer just a library — maybe we give it a different name and say it's some sort of a unit of source code. And in that case, maybe — it sounds kind of a stretch to me, to be honest, right: you say that an instrumentation library is not just a library but anything — maybe it's a class, maybe something else.
B
Okay, if not, then yeah — let's go to the, maybe, spec meeting; let's comment on the issue; let's discuss this with Bogdan. We should discuss it with him, since he's not here — maybe, yeah, maybe we do that offline.
F
All right, the next one is the log record ID field. So this one — my takeaway from last week's meeting was that we decided we basically don't need to make a decision on this yet; we don't need to make a decision on this prior to declaring the log data model stable, because it would just be an additive change later. But, Tigran, I saw that you did put the label on here. So did I misinterpret this, or did you rethink this, or—?
F
Okay, cool — so that one's simple. "Limit body to string in logging library APIs." So for this one, I think basically — it sounds like the decision is that we need to write a new OTEP that would be essentially an amendment to OTEP 0150.
F
Okay, thank you. We need more people working on the language libraries. Okay, all right — so we need to write a note up there. The trace flags...
B
In
the
in
the
space,
sorry
in
the
spec
meetings
in
the
I
think
the
explanation
I
received.
I
think
it
was
okay,
but
I
needed
the
confirmation
one
more
time
and
maybe
confirmation
from
you
as
well
guys.
What
do
you
think
about
that
explanation?
I
posted
there.
Does
that
sound
right
to
you
or
not,
because
I
guess
I
am
satisfied.
I
think.
F
Okay, good. "Max size for the log body." So we had a good discussion on this last week. I captured it as best I could on the issue, and I think there were two takeaways; it was not entirely clear to me what exactly we'd do with them, though. So number one was—
B
And for the SDKs, this is very similar to the other limits that we have in the metrics or tracing SDKs. There are the attribute count limits — I think size limits as well — so the concept exists: the SDKs have these environment variables defined, through which you can define the limits for a variety of things. So this is one more thing that is logging-specific, but it's very much in line with the existing concepts. So we should be fine, but I don't think this is required.
F
No — yeah, sorry, that was my mistake, thinking about it that way. The max size for the log body is required, so I just want to identify what we need to do with it, right. Sounds like that particular takeaway is another issue to do later. There's one more on this, which was that we may want to define a response code for servers for when a log is too big.
F
Okay, so the action there is we'll take the label off.
F
Okay,
all
right
and
then
the
last
one
is
name
field
necessary
for
the
log
data
model.
There's
been
a
couple
different
threads
on
this.
I
think
the
summary
as
I
understand
it
is
that
basically,
this
is
primarily
concerned
for
events
or
a
requirement
for
events
potentially-
and
I
know
next
week
we're
going
to
have
a
conversation
with
some
folks
from
the
ebpf
group.
So
perhaps
we
we
just
postponed
that
conversation
or
this
conversation
about
this
issue.
D
Yes, yes — and I think that was planned for today, but I think it's pushed to next week. I just saw Jonathan's message earlier.
F
Okay, thanks for going through those. I think we've got clarity and next steps on a lot of them, and took a couple off our plate, so I think that's good. That's all I have for those. Great, thank you.
D
On Ted's behalf — we heard about it last week when Dan and I joined the event working group — sorry, the eBPF working group — which was actually quite nice. I got a good demo of Pixie there, and then we talked to them about, you know,
D
you know, wanting to listen more closely, and that's how we hashed out that they'll come over to this one next week now. But then at some point he also jumped in, and basically, I think, he's trying to evangelize this, you know — this kind of prototype, right, to introduce sort of, you know, more efficient, columnar-based encodings
D
for batches of stuff. And I think the original author there is looking at multivariate time series, but I think it's kind of scoped in the sense of, you know, being applicable to sort of any sort of structured data. I personally think this is actually pretty interesting. I'm not sure if there's an actual item here — I thought it was super interesting and it's probably gonna
D
come up again at some point, to see whether — if somebody actually goes ahead and builds this for efficiency — there's additional work that we need to do. And, yes, I just put it there. I was also just wondering if anybody else had seen it and had opinions on it.
D
Sorry, go ahead — yeah, yeah. Tigran, chances are that you've probably seen it? You know, I think Jonah — Jonah, you know, in the Slack thread, didn't seem to think that it was actually, you know, time to do something like that. From my own experience over the years — you know, actually going back quite a long time, all the way to, you know, what we're doing today with metrics:
D
there is a huge amount of repetition in labels — and, you know, to some degree it could be argued in any of these structured data formats, in terms of values, right — and columnar encoding can help, right. And there are pretty interesting options out there, and I think, you know, they got it to more than an order of magnitude of improvement in serialization and deserialization and all this type of stuff, if you look at the README there. So I find it interesting — I just wanted to, yeah.
B
Yeah,
it
is
indeed
interesting
the
the
problem
there
is
that
this
representation
is
so
different
from
the
current
one
that
all
of
the
processors
that
we
have
that
operate
on
on
the
on
the
data
model
that
we
define
they
are
pretty
much
useless.
They
can't
operate
on
that
data
and
I
think
even
worse
is
that
if
you
want
to
have
processors
which
operate
on
this
data,
it's
complicated
to
write
such
processors.
B
Maybe
not
maybe
it's
kind
of
just
an
opinion,
but
the
thing
is
that
if
you
receive
this
sort
of
data,
the
only
thing
that
you
can
do
today
in
the
collector
is
you
convert
it
to
the
current
data
model,
because
there's
no
other
way
to
work
with
that
data.
That's
the
internal
data
model
of
the
collector.
B
So
how
do
you
actually
make
this
one?
And
if
you
do
the
conversion
in
the
collector,
you
have
to
do
the
maybe
the
opposite:
conversion.
When
sending
out
of
the
collector
you
lost
all
of
the
performance
benefits,
that's
going
to
be
a
huge
performance
hit,
so
you're
saving
on
the
network
costs
here
significantly
by
you
losing
the
processing
cost.
So
I
don't
know
if,
if
the
net
result
is
going
to
be
negative
or
positive,
it's
hard
to
tell
if
you
were
to
do
this,
I
guess
the
right
way.
It
means
a
lot
more.
B
I don't clearly see, at the moment, that we have the right engineering resources to actually make this a reality, to be honest. That's the largest problem I see here. As a solution to a particular problem this is great, right: you get a great amount of compression on the network.
B
If that's the problem you have, that's great — perhaps this can be used as a protocol on the last leg, right: between the last collector, which runs maybe on your premises, and the back end of the vendor, where you incur the network costs, right. This is a great way to compress data, and there is no longer an intermediary which needs to do any sort of processing. So I see that, probably, in this limited capacity, it can be very, very useful, but—
B
—as a broader data model for OpenTelemetry, I think, at least, I see resourcing problems with actually implementing this and everything that comes with it, right. It's not just the protocol.
B
There was also dictionary compression for the OpenTelemetry protocol, which sort of, kind of, also does a similar thing — not exactly, obviously. It does help to compress the data a lot when it is in uncompressed form; if you do a gzip on top of it, it's no longer that much beneficial, because gzip is also very efficient at removing the duplication.
B
So
if
it's
only
about
getting
rid
of
the
duplicate
data,
then
then
you
get
a
lot
of
that
benefit
just
by
doing
the
traditional
gzip
or
lc4
or
whatever
the
compression
is
right.
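The point that generic compression already removes duplication is easy to check: a repetitive, label-like payload collapses dramatically under plain gzip, with no schema-level dictionary at all. The payload below is illustrative, not a benchmark:

```python
import gzip

# A highly repetitive payload, in the spirit of repeated resource labels.
payload = b'{"service":"checkout","region":"us-east-1"}' * 1000

raw_size = len(payload)                # 43 bytes x 1000 = 43,000 bytes
gz_size = len(gzip.compress(payload))  # a few hundred bytes at most

# gzip's sliding-window matching eliminates the repeats, which is why a
# dedicated dictionary scheme gains much less once gzip is already applied.
```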
B
You can get a lot of that. The step further is to have pre-populated dictionaries, so that you never actually send those over the network at all — and then dictionaries which the sender and receiver can agree on, where you send the deltas to the dictionaries. So there's a lot of interesting and fascinating work,
B
if somebody wants to tackle it, but it is not without its own baggage of problems.
B
That's all I'm saying. So you're now required to keep some state, which has its own problems: you lose the state, or you need the state to do the processing. So it's a trade-off. I did look at some of these things when I was designing the first version of the protocol.
B
What we have here is obviously not the best in any particular dimension — it's a trade-off, a compromise across several dimensions. That's what OTLP is: it really is a compromise. If there is a need for the most compact representation, I'm absolutely sure OTLP can be beaten, as this clearly demonstrates, right.
A
Yeah,
the
cool
thing
with
arrow
is
that
it's
totally
different
memory
representation
that
is
optimized
for
processing
like
large
volume
of
data,
but
this
also
means
that
the
the
last
thing
you
have
mentioned
to
grant
like
that
this
could
be
made
as
the
last
step
like
when
this
goes
out
to
vendor.
Maybe
it's
where
it
makes
the
most
sense
other
than
that.
If
I
recall
with
arrow
there's
like
slightly
different
approach
to
processing
data,
but
maybe
there
will
be
still
some
space
for.
A
I
don't
know
using
some
iterators
with
the
current
api,
but
that
would
be
like
still
significant
effort,
but
what
I
can't
see
is
using
arrow
in
many
of
the
sdks
like,
for
example,
javascript
api
doesn't
even
like
using
protobuf,
because
json
is
so
much
simpler
when
you
don't
want
to
add
too
many
dependencies.
So
I
don't
think
that
the
current
otf
will
go
anytime
soon.
D
Cool, that's super helpful, yeah. And I guess Arrow's in-memory representation is probably more geared towards reads rather than mutation, right?
D
That's my hunch, because it really comes from, you know, processing large blocks of data, and the collector appears to be in the business of mutating pretty heavily — so those are potentially aspects the benchmark doesn't cover. So it's quite interesting. Yeah, the benchmark is presented nicely; I have to say, whoever did this did a pretty good
D
job making it. It's pretty interesting. Good.