From YouTube: 2021-10-27 meeting
A: Good... did I join the wrong Zoom, or did you want to join the boxing?
B: No, Logs... yeah, you're in the right place. Yeah, I'm looking for some help with the Go logging stuff, which I added to the agenda. But, oh, that's...
B: Well, maybe... I think that's kind of what I wanted to get a little bit of a better understanding of today, because I'm not exactly sure where the logging signal's state is, and how that's going to interact with, you know, existing language SIGs. So, yeah, I put it at the end, because I don't want to disrupt this too much, but it was recommended on Monday that I come in and ask you guys about it.
A: Okay, so I put in the first item. So OpenTelemetry is right now working on publishing a roadmap for the next year, so this question is driven by that: when do we think, particularly with regards to the data model, when do we think we will be able to have a stable data model? And then maybe we can also talk about the other items that we want to put into the roadmap. The reason I want to discuss the log data model first is because a lot of things depend on it, right?
A: I was thinking that maybe we could do that by the end of the year, but then it's almost November, right, and the holiday season is approaching. So I guess that gives us less than two months, maybe a month and a bit, so maybe I'm a bit more optimistic than what is actually achievable. I wanted to see what people think about it.
C: I thought that the Java work was much further along, at least based on when I was attending and the work that some of my colleagues did towards it. But I thought all of that was already...
A: So there is an open PR that, if merged, probably makes it very close to what the prototype is going to be, but it is still open. So I'm not completely sure when exactly it's going to be merged. It will be great if it is, but still, there's probably not going to be a lot of time remaining after it is merged for us to discuss the results and then, if there's anything necessary, alter the data model based on those decisions.
D: I'm part of the client-side telemetry SIG, where we started, you know, exploring using logs to represent events, and when I looked at the log spec... there aren't enough examples of how logs have been used for representing events, although it says they can be used. Are there considerations on the data model for events that this group has considered? Is there anything other than the name field?
A: So the data model says that events are another name for log records. There are probably not enough examples of things that people call events that show how they are represented in the log data model.
A: Maybe, yeah, maybe also open an issue, and then we'll see if anybody is able to contribute that. Do you have anything specific in mind with regards to things not working well in the current data model for certain types of events that you're aware of? That would be an interesting discovery, if there is anything like that.
D: Not necessarily, you know. I just started looking at the logs data model, and most... actually, all of the fields seem to be optional, right, so it should be accommodating, yeah. But I know there are several use cases in mobile applications and browser applications which are events, and I heard even the eBPF SIG, you know, wants to use the events from eBPF...
D: ...you know, as log records, yeah. So we could provide examples, for sure. Okay, so my specific question was: when you want to mark the log data model as stable, what does that mean for the semantic conventions? Would that be part of the stable spec? Because we would be coming up with PRs on semantic conventions for the client telemetry, on the events, yeah.
A: It defines the space of possible data, what is possible to represent, right, but the semantic conventions then tell what exactly you want to represent using this data model. It evolves separately, so it's not going to be declared stable at the same time; it's not even declared stable for traces or for metrics. So that will be separate, and there will always be more contributions in semantic conventions. So "data model" means the fields, their naming, the data types of the fields: what is possible to represent. And "stable" means no breaking changes are allowed. Extending it in the future in a non-breaking manner is still a possibility; we do allow that. So we can't remove stuff, we can't change things in a breaking way, but it doesn't set things in stone in the sense that nothing can ever be added to the data model if we declare it stable this year. Okay.
F: Hey, Santosh, can I ask you a question? Can you kind of give us an idea of what the shape of an event looks like in your mind? Specifically, the reason I'm asking is because the way that we have so far been thinking about the log data model is that the body can really include just about anything you want, right. So let's say your event is a JSON of some sort; let's say probably, likely, right? I wouldn't be surprised.
F: Well then, even with the current spec, you just put it there, right, and then you rely on the receiver to know what it means, right. So now my question is: is that really all we're talking about here, quote-unquote, or are you also thinking about an agreed-upon, normalized schema for events that would potentially have to get lifted up; that you are thinking about maybe putting into the envelope versus the body?
D: No, I think the former. An example would be, let's say, on a mobile or a browser application, you click a button, and we want to capture how many end users click this button versus that button. Right, we want to, you know, identify that statistic, so we just want to capture that information. So the name of the event could be, let's say, a button click, and then the body could be, you know, more details about...
D: ...you know, what that button is. So I think it would fit into the existing data model. Although, you know, I originally thought there are some fields, like severity, which are not applicable to events, I noticed later that they are optional. So, data-model-wise, I think, at least from the browser and mobile applications perspective, it's a free form; the body is a free-form JSON anyway, right. And the envelope... I haven't looked closely, but I'll definitely, you know, get back next week if there is anything I find.
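For concreteness, the button-click event just described can be sketched as a log record shape. This is only an illustration: the struct loosely follows the log data model (a name, a body, attributes), and keys like `button.id` and `session.id` are invented for the example, not taken from any semantic convention.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LogRecord loosely mirrors the log data model's shape: an event
// name, a free-form body, and key/value attributes. Field and key
// names here are illustrative, not from the spec.
type LogRecord struct {
	Name       string                 `json:"name"`
	Body       map[string]interface{} `json:"body"`
	Attributes map[string]interface{} `json:"attributes"`
}

// buttonClick builds the "button click" event from the discussion:
// the name identifies the event, the body carries the details.
func buttonClick() LogRecord {
	return LogRecord{
		Name: "button.click",
		Body: map[string]interface{}{
			"button.id": "checkout",
			"screen":    "cart",
		},
		Attributes: map[string]interface{}{
			"session.id": "abc123",
		},
	}
}

func main() {
	out, _ := json.Marshal(buttonClick())
	fmt.Println(string(out))
}
```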
A: Right, yeah, the earlier we find such counter-examples the better, right, so that we can fix the data model if it requires fixing; and if not, then we will gain a bit more confidence that it does actually work. Yeah, yeah.
G: On Tigran's original question: you know, I commented on the issue, but I would agree it's ambitious. It would be an ambitious goal, and perhaps even unrealistic, but I think it would be great if we could make a substantial push towards this over the next, you know, six weeks or so. I think we could get very close, if not all the way there, with a focused effort, and I think a lot of people in this group would like to see momentum on that.
A: Yeah, yeah, I agree. I think it's realistic that we achieve most of what needs to be done, but I don't know if there will remain some unknowns, or maybe doubts, right, given that the prototypes are unfinished; we don't know if there is anything yet to discover. It's more that, rather than the work itself: if we look at the list of the currently open issues, I think it's quite doable, right.
A: If we actually do put in the work, it's doable. I'm more concerned about the things that we don't know, about giving us a bit more breathing room for the discovery of things. And I think you also suggested maybe changing the meeting to be weekly instead of bi-weekly. I think, yeah, that would be useful. If people here think that they will be able to make it every week, then we can change to that. Any thoughts? Anybody?
G: I'll just extend that thought to say that I wanted to propose that we try to use this time to have the conversation specifically about the specification, because these meetings do tend to be quite short, and we do have all these open questions.
A: Okay, so I guess, to be safe, because we need to publish the roadmap, I will probably put Q1 there, and if we do manage to finish it early, that's great; maybe it happens in January, that's great. But it gives us enough time; I think that should be sufficient. As a group, though, we should probably aim to do it maybe by the end of the year, if we're able to do that.
G: I put this one on here. Okay, so there's this big point of contention, I think, around what the data model is meant to represent in terms of attributes and body. And, as I said, there have been quite a number of GitHub issues opened, some pull requests put up proposing concrete things, and pretty much...
G: ...everyone has gotten bogged down in discussion, or it's just been sort of abandoned. So I just wanted to try one more time here to see if we can reach consensus in this group as to what we think at least some of the agreed-upon points are. In particular, I think the most fundamental point is this question of whether or not the body can be structured, or rather, whether the data model must allow the body to be structured.
G: I think it links to a couple of comments where it's suggested that it should be, and there's at least one other opinion sort of seconding that comment in each case. So I think there's some consensus around that, but I will say that I'm not convinced that I have seen a great example that says, 100% clearly, like, concretely: this is the case where this is necessary.
G: I personally think it is. It is a more flexible model, for sure. But I think, as a group, it would be great if we could just produce some concrete examples of the cases where this is necessary, and that's why it's going to be that way, and just capture that in the documentation, you know, the data model or wherever it would be. So, yeah, any thoughts?
A: Specific examples, yeah. So, before I answer that: I think Yuri's framing of the question was interesting. I guess he said, "Who's asking?", right. Why do you need to answer the question of whether the body is structured or not? And I think there are two angles here. One is: I'm using the OpenTelemetry logging SDK and I need to produce log records.
A: Do I need to be able to produce structured bodies at all? And I think, for this one, the answer is probably no. You probably don't need to do that at all, because likely you're using some logging library where the log message is just a string or something, right, and then there are some additional attributes that likely come with it.
A: The answer here is no; it's sufficient that the body is a string, because we also allow additional structured data to be recorded as attributes. And then the second angle is: okay, but I have this log record that comes from who knows where, produced who knows how. Maybe it's JSON, maybe it's some sort of... it has a structure. I have no idea about that structure for those sources.
A: I want my log data model to be flexible enough to represent them in a way that is non-ambiguous, that we can then later use inside data-processing units, like the Collector, to work with the data, to modify it in a way that doesn't corrupt it, right, so that I can pass the data through unchanged, or I can work with that data.
A: Now, to answer the particular question you had: I think we have two examples there in the document. One is Google Cloud logs, I think, and the second is Splunk HEC, and both have bodies which can be arbitrarily complex objects, with nested maps and arrays and all that stuff. With Google Cloud logs, I think they allow three types of bodies: one is regular strings, the second is JSON objects, and the third is protobuf objects, which I don't even know if they think are going to be represented in binary form or some structure; to me, the Google Cloud logs document is unclear about that. But I think these are examples; these are the best examples I have, at least.
A: So maybe we can find better examples, but at least these two tell me that, indeed, there are likely some log sources which think that the body is something that is structured, and the structured data is not simply captured in the form of key-value pairs, let's say, right. So that's...
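As an illustration of the kind of source being described, here is a sketch of a Splunk-HEC-style payload in which the event member is an arbitrarily nested object rather than a string or a flat key/value set. The JSON shape is approximated for the example, not an exact HEC schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hecEvent approximates the shape of a Splunk HEC payload: the
// "event" member may be an arbitrarily nested object, which is the
// case where a string-only (or flat key/value) body falls short.
type hecEvent struct {
	Time  int64                  `json:"time"`
	Event map[string]interface{} `json:"event"`
}

// parsePayload decodes a HEC-style payload and returns its event body.
func parsePayload(raw []byte) (map[string]interface{}, error) {
	var h hecEvent
	err := json.Unmarshal(raw, &h)
	return h.Event, err
}

func main() {
	raw := []byte(`{"time":1635292800,"event":{"action":"login","user":{"id":"u1","roles":["admin","dev"]}}}`)
	ev, err := parsePayload(raw)
	if err != nil {
		panic(err)
	}
	// The nested "user" object survives intact instead of being
	// flattened into key/value pairs.
	fmt.Println(ev["action"])
}
```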
H: Yeah, I think so... we spoke... go ahead, go ahead. I was just going to point out that we do also have some logging libraries that are naturally structured; zap, I think, is one of them, that doesn't necessarily think in strings. It does have a structured output, and any of the structured or semantic logging libraries are that way. So, yeah, obviously, right now they're going to files, they're serialized, but having to create an interchange format to be able to put it into our structured event seems counterintuitive.
F: So I think we started this data model thing with the idea that the body is basically opaque, right. You know, going back: there's lots of experience represented here, and, you know, logs can be all kinds of crazy, right. At the same time, of course, the world is moving, you know, and modern logging gets a little bit less crazy, right. It's still diverse, though. I know that I might sound a bit like a broken record here, but let me re-propose something that I've talked about a few times before, which was to solve this problem by allowing people to optionally tag, in the envelope, what to expect in the body, all right?
F
So
the
idea
would
be
that,
like
you
know
from
bottom
up,
you
would
basically
assume
that
if
there
is
no
such
annotation
at
all,
then
you
know
we
have
to
decide
whether
the
body
is
supposed
to
be
interpreted
as
a
string
or
by
the
way
we
can
figure
that
out.
But,
let's
just
say,
we
start
with
there's
nothing
else,
given
just
assume.
It's
a
string
right
and
then,
if
that
string
happens,
to
be
serialized
json,
you
know,
and
you
can
figure
this
out
on
the
back
end
then
good
for
you.
F: If not, you know, then it's not. Now, if you have something that wants to actually indicate that it is serialized JSON and, you know, put that in the body... sorry, put that in the envelope: say something like, you know, body type equals JSON, right. And so this is a little bit inspired by, you know, HTTP headers, which, you know, content type and that type of stuff, right.
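A minimal sketch of this proposal: the record carries an optional annotation next to the body saying how to interpret it. The attribute name `body.type` is hypothetical, since it is exactly the suggestion being made here, not anything defined in the spec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Record is a toy log record: an opaque string body plus attributes.
type Record struct {
	Body       string
	Attributes map[string]string
}

// interpretBody applies the proposed convention: if the envelope
// carries the (hypothetical) "body.type":"json" annotation, parse
// the body; otherwise leave it as an opaque string.
func interpretBody(r Record) (interface{}, error) {
	if r.Attributes["body.type"] == "json" {
		var v interface{}
		err := json.Unmarshal([]byte(r.Body), &v)
		return v, err
	}
	return r.Body, nil
}

func main() {
	tagged := Record{
		Body:       `{"user":"alice","action":"login"}`,
		Attributes: map[string]string{"body.type": "json"},
	}
	v, _ := interpretBody(tagged)
	fmt.Printf("%T\n", v) // parsed into a structured value

	plain := Record{Body: "plain text line", Attributes: map[string]string{}}
	v, _ = interpretBody(plain)
	fmt.Printf("%T\n", v) // left as a string
}
```

Consumers that do not recognize the annotation simply keep treating the body as a string, which is what makes the scheme backward compatible.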
F: There's been a lot of evolution of that, and, you know, some folks might look at this HTTP example as, "hey, this is kind of applied craziness on some levels", because these are not really normed, and sometimes they can cause problems. But, you know, I don't think we want to go back and say the body needs to actually be structured, and here's the one structure, deal with it.
F: I just don't think that works, right. And the only other option that I can think of, then, is to basically allow people to annotate what to expect there, right. And if it's zap, or something that actually has a nice structure, you can just put it into proto, serialized as a byte array into the body, and just annotate it.
I: Yeah, so I think this was largely what I was thinking too: let's keep the body just a string or byte array. In the latter case, we could later describe, with some tag or whatever else, what kind of byte array it is, so the back end or the vendor could do whatever kind of good stuff with that. And we had this long discussion; the thread is quite long on this issue. But this is really what Tigran has brought up.
I: We have specifically said that we want to be able to convert some external data format to the internal data format and then back, and preferably keep the same structure. And since HEC and Google already have this capability, this implies that we need to have the ability to express the body as structured data, and I think that, if this is a hard requirement, then we need to keep the body structured. And also, what I want to say is that originally we proposed just some, let's say, conventions on when to use a structured body and whatnot, and actually Yuri was against this idea.
A: Well, if the implementation is carefully done, and done correctly, we essentially guarantee that, unless you specifically define transformations of the data, the data that you receive at the input of the Collector is going to be exactly the data you see at the output of the Collector. I think this is a very important property of the Collector, which makes it very usable as an intermediate gateway; you can put it anywhere.
A: If we say that the body is opaque, right, that it's a byte array, it means that the Collector has no choice but to serialize whatever it receives, from whatever source it has, at input time, right. It receives something which is not a byte array; it's something like JSON, right, or maybe a protobuf, or whatever it is. You put it into the body, and now, when you have to do the reverse operation, when you're sending it out, you need to actually know what those bytes represent. When you receive, you have to do the serialization; when you send, the deserialization. But also, if you want to have processing of that data, now you have to unpack it, and you have to understand all the possible MIME encodings, right. You have to understand all those; you have to be able to unpack those to some internal representation so that your generic processors can operate on it. I think this is probably realistically impossible to do, at least with the current architecture of the Collector.
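The pass-through property being argued for can be illustrated with a toy pipeline. If the body is decoded once into a structured internal form at the receiver, the exporter can reproduce an equivalent document without every processor needing per-encoding knowledge. This is a sketch of the property, not Collector code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// roundTrip models receive -> (optional processing) -> export with a
// structured internal body: decode once, hold the structure, encode
// once. Processors in the middle see a generic value, not raw bytes.
func roundTrip(raw []byte) ([]byte, error) {
	var body interface{}
	if err := json.Unmarshal(raw, &body); err != nil {
		return nil, err // receiver side: into the internal representation
	}
	// ...generic processors could inspect or modify `body` here...
	return json.Marshal(body) // exporter side: back out
}

// sameJSON reports whether two JSON documents are structurally equal
// (key order and whitespace aside).
func sameJSON(a, b []byte) bool {
	var va, vb interface{}
	if json.Unmarshal(a, &va) != nil || json.Unmarshal(b, &vb) != nil {
		return false
	}
	return reflect.DeepEqual(va, vb)
}

func main() {
	in := []byte(`{"event":{"nested":{"k":"v"},"n":1}}`)
	out, err := roundTrip(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(sameJSON(in, out)) // the structure survives the trip
}
```

With an opaque byte-array body, the same guarantee would require the pipeline to carry decoder/encoder pairs for every source encoding, which is the difficulty being described.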
I: Yeah, yeah. There would be one way to get away with that: we can have a semantic convention, like a special tag. When the body is structured, then we extract this value into, let's say, some property, let's call it body.structured, and put it as an attribute; and then, when we do the conversion the other way, we could check if this field is present and convert it back. But it has other implications, so...
A: If we do that, we are going to lose some interesting things that the Collector can do today. And we also have to... so, when you described that, Christian, that the back ends can deal with the MIME type and they can understand it: it's not just the back ends. The problem is that we have the Collector; it has to be as smart as the back ends are, to be able to deal with that data, which I think is difficult to achieve, right. It needs to understand all those types in that case.
F: No, yeah, this time ping-pong... okay. Tigran, thanks for walking through this one more time; I think I'm starting to kind of finally get to the point of understanding what you're after there. So then, what do we do? What's the proposal here? I'm still not clear.
H: Yeah, well, I just wanted to jump in with two things. One: my kind of understanding of what Christian was saying is that they're tags; it's not necessarily something that... like with MIME types, you don't necessarily have to understand all MIME types to be able to process something that carries one, right. And then the other thing: I was reading something Dan wrote that I thought was a really good way of putting it.
H: I got sidetracked and I wasn't able to comment, but Dan put out a suggestion, with regards to the body and the attributes, that (and correct me if I get this wrong) if we look at the body as being the original thing that we got, then the attributes are kind of anything that we discovered afterwards or extracted from it, but we keep the body kind of as we got it.
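That framing can be sketched directly: a processor discovers a severity token in the raw line and records it as an attribute, while the body stays byte-for-byte what arrived. The attribute key is invented for the example (severity is actually a top-level field in the data model; it is used as an attribute name here only for illustration).

```go
package main

import (
	"fmt"
	"regexp"
)

// Record: the body is the original thing we got; attributes hold
// whatever we discovered about it afterwards.
type Record struct {
	Body       string
	Attributes map[string]string
}

var sevRe = regexp.MustCompile(`\b(ERROR|WARN|INFO)\b`)

// enrich extracts a severity token into an attribute (the key name
// is illustrative) and leaves the body untouched.
func enrich(r Record) Record {
	if m := sevRe.FindString(r.Body); m != "" {
		r.Attributes["extracted.severity"] = m
	}
	return r
}

func main() {
	r := enrich(Record{
		Body:       "2021-10-27T10:00:00Z ERROR payment failed",
		Attributes: map[string]string{},
	})
	fmt.Println(r.Attributes["extracted.severity"])
	fmt.Println(r.Body) // the original line, unchanged
}
```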
F: So, basically, in your example, right: if HEC comes in, and we consider that to be, you know, the body, then there's some processing that lifts a bunch of stuff up, you know, adds resource annotations, right, adds attributes, and then it hits the Splunk forwarder, or the HEC forwarder, let's say, right. And the HEC forwarder then... basically, what is it going to do at that point, right?
F: It's going to re-HEC the whole thing, but you can't just pass it through, right, because otherwise whatever flows through the pipelines would have to be some representation of HEC in particular, right. Especially if you want to save... which is possible, right, but it would just be, then, you know, a different thing that flows through the pipeline, if you want to save these sorts of, you know, constant serializations, or basically translations, which I understand do cost CPU.
H: You're right that that does break... I mean, with HEC specifically, I think we've got some workarounds, but that does break what Tigran was talking about, about what comes in staying the same as what goes out if you don't change it. I'm just trying to figure out how to both split up the attributes in the model, because that's one of the things we keep going around and around about, but also this piece of, like, you know, what is that body? Does it need to be structured?
F: So, if the body was structured... you know, let's say we solve it this way: we have the envelope, and then we have two things. One is "raw", that has the raw incoming thing, just like you said, David, right. In many cases, that's probably all that's going to happen, especially if you receive some stuff from syslog and nobody has, you know, put some crazy processors in there. But then there are cases where you maybe want to run parsing, or some sort of normalization...
F: ...you know, in the Collector already, pushing that down so you don't have to do it on the back end; some customers might prefer that, since it offloads the cost from the back end, right. And then that becomes a structured thing, so essentially that's what attributes are, probably, right, and then we're back to attributes. For attributes, though, I think we have said that we want to follow semantic conventions, and now that ties us into yet another normalized schema that's evolved. Yeah, all right, anyways.
A: I guess that was before David said that about zap. So I was thinking that maybe we have different guidelines for SDKs and for people who want to represent their data format in the OpenTelemetry data model. For the logging libraries, I guess we could have this limitation, which would be natural, right.
A
You
have
the
blog
body
as
a
message
and
then
you
have
attributes
which
can
be
can
be
structured
and
for
the
for
the
and
then
there
would
be
a
recommendation
for
people
who
actually
implement
their
own
mapping
like
they
write
receivers
or
exporters
in
the
collector
and
there
in
the
recommendations
we
would
say:
okay,
so
what
does
your
format
say?
If
you
have
a
format
where
the
body
is
structured,
then
you
put
the
structured
body
in
the
in
the
open,
climatry
data
model.
If
it's,
if
it's
a
string,
you
put
a
string.
A: So, whatever you have, whatever the semantics of your own data format are, you follow those semantics. Essentially, we allow you to represent the things the way they are in your own data format, so there's nothing to decide here, right. It's a one-to-one possibility to represent precisely what you have; you don't need to convert it to any other form. So that's what I was thinking would kind of resolve this. So why is this question even arising, right? Why do I need to know...
A: Why do I need to answer the question of whether I put something in the body or in the attributes? It's those two cases. One is: I'm generating data, I'm inventing data, in a sense, right, and now I have a few places to put it, in a few different forms. I don't know... but that's when I'm probably an application developer; I'm generating logs, and in that situation I'm constrained by what is available to me as an API to generate log records. It doesn't really matter, downstream of my data...
A
What
in
what
types
my
data
can
be
represented
as
soon
as
it
is
capable
of
representing
what
can
be
expressed
in
the
api.
So
that's
one
use
case,
and
the
second
is
I
am.
I
already
have
this
data
at
hand.
In
some
data
format,
I
need
to
convert
it
to
to
open
clementry
and
back
and
so
for
this
we
have
another
set
of
recommendations.
So
that's
what
I
was
thinking
about.
A: Right, so, somebody who implements a zap appender, or whatever it's called there, which sends data using the OpenTelemetry protocol... I don't know if zap actually has a way of exposing an API that is called by an application developer. But if an application developer can make a call and give it structured data, then, like, there's no way around it: we have to allow them to do that, right. So I don't see why we would need to force them to flatten that data, if we're capable of representing it. Okay.
F: Okay, okay, yeah. So, with zap, from the few times that I have touched it, which was a little while ago, so if somebody knows this better, you know, remind me... but I think this is actually a really valid example. It does everything pretty KV, like key-value-pair oriented, right, compared to that.
G: This is a really good example... this conversation, this part of it at least, is a really good example of where having guidelines is really important, because I think, even within this group, we're not aligned. Like, I think that there are some people here assuming that, in zap, you provide a string and then key-value pairs, and then all of that goes into the body; and there are other people thinking that the string is the body and the key-value pairs are attributes.
I: Just remembering: I put this into the chat, the last link, so it's specific. The mapping specifically says that zap has a message, which goes into the body, and the type is string. So this was what was assumed by the mapping, and there are other fields that go to different fields and, of course, there are attributes that go to attributes, yeah.
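The mapping being referenced reads roughly like this in code. The types are simplified stand-ins (neither zap's real entry type nor the SDK's real record type): the message string maps to the body, and the fields map to attributes.

```go
package main

import "fmt"

// zapEntry stands in for a zap-style log entry: a message string
// plus typed key/value fields.
type zapEntry struct {
	Message string
	Fields  map[string]interface{}
}

// logRecord stands in for the log data model's record: per the
// mapping discussed, Body gets the message (a string) and
// Attributes get the fields.
type logRecord struct {
	Body       string
	Attributes map[string]interface{}
}

func toLogRecord(e zapEntry) logRecord {
	return logRecord{Body: e.Message, Attributes: e.Fields}
}

func main() {
	rec := toLogRecord(zapEntry{
		Message: "request handled",
		Fields:  map[string]interface{}{"status": 200, "path": "/api/items"},
	})
	fmt.Println(rec.Body)
	fmt.Println(rec.Attributes["status"])
}
```

The competing reading mentioned above (message plus fields all going into a structured body) would change only the `toLogRecord` function, which is why a written guideline, rather than the data model itself, is what settles it.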
G: Yeah, and just to be clear, my point was not that it should be one way or the other; clearly we have established this. It's just that the guideline is what tells us what it should be, and so, yeah, this is why it's important to have those guidelines. In some cases we don't have them; I'm just advocating that we really should push to have these, even if some people, who I don't think are here, are pushing against that idea.
A: So, essentially, what we have are examples of mappings, but we want guidelines which explain the principles of how we arrived at those particular mappings, right, so that somebody who is in a similar situation, who needs to come up with the mapping for some other logging library, knows what to do, right, and can produce something that is also reasonable. And I think we can do that.
A
Maybe
we
can
try
to
do
that
if
we
narrow
this
down
just
to
be
guidelines
for
people
who
need
to
implement
log
blog
libraries
that
output
telemetry
format,
I
think
I
think
that's
achievable,
you
probably
can
do
it
and
I
think
that's
what
we
that's.
The
only
thing
that
we
need
to
do.
Data
model
can
remain
as
it
is,
and
the
declaration
here
would
be
that
the
data
model
is
designed
to
be
flexible
enough
to
cover
imaginable
cases
right.
A
Let's
say
this
way
that
we
think
that
the
data
can
exist
in
in
somewhere
there
right
and
we
do
not
necessarily
have
to
exactly
justify
all
the
expressiveness.
By
the
examples,
I
think
even
that
probably
is
unnecessary.
That
would
be
great,
like
we
show
examples
where
this
particular
level
of
flexibility
and
expressiveness
is
necessary,
but
slight,
maybe
over
engineering.
If
you
will,
I
think
it
doesn't
hurt,
because
it's
not
hugely
more
complicated.
A
G
Okay,
so
let's
I
think
what
we
should
do
here
is
try
to
capture
some
specific
actions.
We're
gonna
we're
gonna,
take
here
right,
because
I
think
we
have
I'm
hearing
a
lot
of
consensus
now.
G: Just to articulate what I think I'm hearing: there's basically no disagreement over the fact that the body should be structured, but, to my knowledge, there's not a clear rationale documented in the data model that says that, or says why that is the case. And so I want to suggest that that should be one action: that we add a rationale. And then, with respect to guidelines, it sounds like we could break those down into guidelines for logging-library authors, basically, and then, I think, there's probably another category of guidelines for mapping authors, people basically consuming... yeah, basically, yeah.
G
I
don't
know
someone
want
to
articulate
that
better
than
I
can,
but
but
maybe
we
can
extract
a
few
actions
here
on
that
topic
of
what
specifically
we
need
to
do.
It
sounds
like
we
need
more
than
we
have,
but
yeah.
H: You know, I think one of the other things that I keep bouncing off of, that I think we could probably approach directly here too, is that we're kind of asking that the body, or that the attributes, follow semantic conventions. But, like, in this zap example, the person that's defining the name of the attribute that's getting added is the developer, who doesn't necessarily know anything about OpenTelemetry or the conventions. So it might be that, in mapping...
H: ...okay, if you've got body attributes, it goes into this particular semantic-convention attribute, because those can be key-value pairs. So if, instead of, you know, each of these attributes... anyway, the point isn't the solution. The point is, I think, as Dan's saying, kind of going through this and saying: these are the ways that we don't tie ourselves in knots.
A: I tried to capture the action items in the document; if you can think of more, please add them there, and then we can create issues, and then we can work on this. But I think this will kind of help to narrow down the problem a bit more, to make it more focused, so that we can actually connect it. For the sake of time, there is one more thing that Tyler wanted to discuss. If you guys don't mind, let's move to the next item, and we can continue this discussion; I don't think we're going to have a full resolution today anyway.
B: Cool, yeah, thanks, yeah. So, first off, sorry: it looks like my internet's been kind of flaky this meeting, so sorry if I cut out; I'll try to cut video if that happens. But I also apologize, because I'm coming at this request with a lot of ignorance about the current state of the project. So I'm going to say some stupid things; just recognize that I'm not cognizant of a lot of things.
B: But I wanted to touch base, basically, because I was pointed to this SIG on Monday. The Go SIG is trying to add logging capabilities to the SDK that we currently already have released. So, right now, we have it released, and our troubleshooting guidelines are not great, and there's no logging ability in the SDK, and so we're trying to solve that for the user.
B: And, in doing so, we wanted to, you know, start integrating some sort of logging package into the project, and making sure that those logs are able to be shipped somewhere. And so there's, I'm sure, as you're already noticing, overlap with what this SIG, I think, is ultimately trying to do. One of the things that we really did have a problem with, as is noted in a lot of the documentation that I've read, is that there is...
B
No,
you
know
a
single
one
defined
logging
packaging
go
that
a
lot
of
people
always
use,
and
so
we
would
just
use
that
and
we
would
automatically
be
able
to
work
with
whatever
you
know.
Users
of
that
project
are
using.
So
we
wanted
to
make
some
sort
of
extension
or
some
sort
of
pluggable
system.
I
think
for
users
to
integrate,
with
whatever
their
logging
system
is
and
as
I've
kind
of
noted
in
the
docs.
Here
we
found
this
project
called
log
r,
which
seems
to
separate
these
concerns
very
similar.
B
It's very similar, I think, to what this project is eventually trying to offer, where there is a way to sink logs to, you know, a lot of different outputs, and one of them would be OpenTelemetry. And I wanted to kind of ask about this, because I've noticed in reading the docs, and I'll be honest:
B
I haven't read all of the logging specification, most of the overview, some of the data model, and I just saw this OTEP 0150 as well; I'm just starting to read that. But the logging API: what is kind of the plan there? I'd really not like the Go SIG to go and import a project, and then we have to build our own API that replaces it eventually. So maybe we could just kind of stop there and I'll ask that question.
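The separation B describes in logr can be sketched as below. This is a minimal, stdlib-only Go illustration, assuming a simplified `Sink` interface as a stand-in for logr's `LogSink`; it is not logr's actual API, just the shape of the idea: a front-end logger that delegates to a pluggable back end, where an OpenTelemetry bridge could be one such back end.

```go
package main

import "fmt"

// Sink is a simplified stand-in for the idea behind logr's LogSink:
// the user-facing logging API delegates every record to a pluggable
// back end. This interface is illustrative, not logr's real API.
type Sink interface {
	Info(msg string, keysAndValues ...any)
}

// Logger is the front end application code calls; it knows nothing
// about where records end up.
type Logger struct{ sink Sink }

func (l Logger) Info(msg string, kv ...any) { l.sink.Info(msg, kv...) }

// render formats a record; split out so it is easy to test.
func render(msg string, kv ...any) string {
	return fmt.Sprintf("msg=%q attrs=%v", msg, kv)
}

// stdoutSink is one possible back end; an OpenTelemetry bridge would be
// another Sink implementation, converting records to the log data model.
type stdoutSink struct{}

func (stdoutSink) Info(msg string, kv ...any) {
	fmt.Println(render(msg, kv...))
}

func main() {
	log := Logger{sink: stdoutSink{}}
	log.Info("request handled", "status", 200)
}
```

Swapping the sink is the whole point: application code keeps calling the same `Info`, while the destination changes underneath it.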
B
A
So I think 0150 is very useful to read; it probably answers some of the questions you have. The thinking was that we do not want to produce our own logging library, yet another one for people to use. What we want to do is: whatever logging library you already use, or you prefer to use, in your language, in Go, whether it's zap or logrus or whatever it is.
A
We want to make it possible for these libraries to output logs in an OpenTelemetry-compliant manner. To do that, provided that the library is extensible, and many libraries are, it should be easy to build these extensions for those libraries, like appenders in Java, or whatever they are called. And for that to be easily possible to do, we will provide a logging library SDK: an SDK to build appenders for logging libraries, if you will. That's what 0150 defines.
A
It says: here are the things that can be implemented in the OpenTelemetry SDK; they can be part of the OpenTelemetry SDK. They are not intended to be directly called by end users, by application developers. They are intended to be used by developers of logging libraries, or developers of extensions for logging libraries, so that they allow them to emit the data, the logs, using OpenTelemetry formats and OpenTelemetry exporters. So the exporters will also be provided as part of the logging SDK. And this OTEP is already implemented in Python.
A
There is a prototype in progress for Java. So I think that is the direction that we want to go in. Will we eventually have a logging API, like a full-blown logging API offered by OpenTelemetry? Maybe sometime in the future, and then that logging API can use this logging SDK to implement the outputs.
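As a rough illustration of the appender/exporter split A describes, here is a hedged Go sketch. The type names (`LogRecord`, `Exporter`, `Appender`) are hypothetical, modeled loosely on OTEP 0150's description, and are not an actual OpenTelemetry API:

```go
package main

import (
	"fmt"
	"time"
)

// LogRecord loosely mirrors the OpenTelemetry log data model
// (timestamp, severity, body, attributes); field names are illustrative.
type LogRecord struct {
	Timestamp  time.Time
	Severity   string
	Body       string
	Attributes map[string]any
}

// Exporter stands in for the exporters the logging SDK would provide.
type Exporter interface {
	Export(rec LogRecord)
}

// memoryExporter collects records in memory; a real implementation
// would ship them over OTLP.
type memoryExporter struct{ records []LogRecord }

func (m *memoryExporter) Export(r LogRecord) { m.records = append(m.records, r) }

// Appender is the bridge a logging-library extension would implement:
// it translates the library's native records into LogRecords and hands
// them to an exporter, so applications keep using their own logger.
type Appender struct{ exp Exporter }

func (a Appender) Append(severity, msg string, attrs map[string]any) {
	a.exp.Export(LogRecord{
		Timestamp:  time.Now(),
		Severity:   severity,
		Body:       msg,
		Attributes: attrs,
	})
}

func main() {
	mem := &memoryExporter{}
	app := Appender{exp: mem}
	app.Append("INFO", "user logged in", map[string]any{"user.id": 42})
	fmt.Println(len(mem.records), mem.records[0].Body)
}
```

A real appender would also map severities and preserve trace context, but the shape is the same: the library extension produces records, and the SDK-provided exporter ships them.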
B
A
I'm not aware of a discussion like that happening. I don't know; that's a good question. Do you want to do that? Maybe as an option. Like, why do you care about the logs of the OpenTelemetry SDK itself? Probably because something went wrong in the SDK. And if something went wrong in the SDK, are there any guarantees that the logs themselves will make it to whatever network destination you're sending them to?
A
E
B
Yeah, okay. So, to be clear, that doesn't answer a lot of my questions, I think, but it gives me a lot of really great resources, so I think I probably have enough to go on and do some more research on this one. So, I guess, thanks. There's definitely a lot going on here, so sorry for the inconvenience.
A
B
Okay, yeah, I get that feeling as well. I just found out about it in this meeting, so I'm kind of skimming it, but yeah, I think it should answer some of my questions and help with design choices, so yeah, thanks.
A
B
Yeah, I mean, it looks very similar to the trace signal design, so I imagine a lot of the code could probably be copied, but yeah, we'll take a look.
A
A
A
E
A
F
G
I just wanted to say to Tyler: you're not inconveniencing this meeting. I think, from my perspective, we need more involvement from logging library authors. You know, I'm focused on the collector; we have a lot of conversations about the specification, but honestly there aren't a lot of perspectives from logging library authors, and I think that would be helpful in producing better solutions and more clarity.