From YouTube: 2021-12-15 meeting
A: All right...
C: No, no, come on in. Yeah, come on in. Yes, Christian, in a meeting here. No.
C: I'm in Austin today; we are finalizing the renovation of the new house, and so I'm hanging out in Australia.
A: Right, let's go ahead. So, Elastic folks, welcome. I think we know what the topic is: we wanted to talk about ECS, right? Do you guys want to maybe go ahead and start the discussion?
H: Exactly. For us it's more of an exploratory session now; we basically want to know where you're at, what the ideas would be, objections, etc. I think Alolita wanted to drive this discussion, or maybe Jonah or Christian, but yeah.
E: Yeah, either one; I don't know if she's going to be joining us today. But yeah, I mean, I think this is something that we explored a year ago. In logging, one of the big challenges that most users have is how to do correlation and how to get a standard way to explain different sources of logging, and obviously one of the main frameworks, or schemas, out there is ECS, which is Apache 2 licensed today. So this is an interesting thing for us to explore, because right now the logging work in OpenTelemetry is pretty focused on SDKs and how to emit custom logging from code, and yet there's tons of existing logging that we also want to send through OpenTelemetry, format in a standard schema, and have some recommendations as to how that data should look, in terms of how it goes into OpenTelemetry and also how it's exported, potentially.
E: So the idea was to bring together a couple of different groups of people that have joined the SIG today. Specifically, we have a couple of product managers from Elastic joining us, and we also have some of the OpenSearch team from AWS joining us, because everyone is interested in the same problem: how do we get a standard schema in OpenTelemetry, to make it easier for all of us to do correlation. That's kind of the intention of the discussion. And, Christian?
C: I think you covered it well. Maybe in my words, for breadth: there's a couple of us that met impromptu at AWS re:Invent, and we had a bunch of coffee, and, as it were, once you have a lot of coffee, ideas start flowing freely. It became clear at the table, between my conversation with Leona and Jonah, and then also with Alolita, that we should probably also have a common schema, right? And, as a lot of you who have been in this business for a long time know, the industry has never really achieved this as such. Attempts have been made in the past; vendors have been trying to push their thing. On our side, Splunk has something. There was a MITRE effort at some point that was trying to be cross-vendor; it ended up getting stuck in a spec state. It's not an easy problem.
C: My observation: I think what Elastic has done is solid. Actually, it's more than solid; I think it's good. Again, it's a hard-to-solve problem, but from the perspective of pragmatism, rough consensus, and all of those things, I think what's on the table, what they have created, is actually quite good. And given that we are fundamentally a vendor consortium over here, I think it provides a great starting point for a discussion about broader adoption. I think it solves problems specifically for those of us that are building security products, or trying to solve, let's just say, security-related use cases; that's kind of where the classic normalization idea comes from. But I think this is going much broader. And the other thing that I'll observe is that this year, I feel like we've had many calls where people were asking: okay, but where am I supposed to stick this field? Where am I supposed to stick that field?
C: And in the spec, of course, from OpenTelemetry, we have this semantic convention for resources, which I personally think makes perfect sense. It is important to understand where something comes from, a piece of telemetry.
C: At the same time, I think it's been kind of a little bit awkward. I feel we didn't really have a good response to folks asking: should I have put it in the body? Should I put it in attributes? What conventions do I need to invent on my own? And so the idea of adopting a relatively well-developed standard here is just appealing. Understood.
A: Right, right. So we have the semantic conventions in OpenTelemetry, which essentially try to serve the same purpose as the Elastic Common Schema. They're likely not as elaborate, not as extensive, as the common schema is, but the goal they try to achieve is the same. Now the question is, I guess: can we somehow adopt Elastic Common Schema? Because it has definitions for more things, more concepts, and it would be very valuable to have these definitions.
A: I think we didn't get answers to that question: is it possible to do, and how can we do it? So, probably a few things to consider. Are there any conflicting definitions, let's say, in OpenTelemetry, that exist already and which conceptually also exist in Elastic Common Schema, but are defined differently, which makes it difficult to try to merge these two definitions?
G: It doesn't matter, but it is conceptually completely compatible, is the feeling, I would say; I think that's our perception here at Elastic. So we resurrected this thought process last week, now that we have space to allocate to ECS, so with Daniel we came here today.
G: We are very willing to work with the OTel community on this. We acknowledge that there are differences, because OpenTelemetry has moved forward; there are some amazing accomplishments in the OTel semantic conventions that we love.
G: I think we are pretty familiar with where the OTel semantic conventions are today, because we have been supporting them for a while, and we acknowledge that. I think it's huge. And I would add, on the idea of merging things: if we merge things, it means that things will be slightly different, and this is something that we completely acknowledge in this thought process.
A: We have this notion of dot notation for the names of the attributes, via which we define hierarchies of concepts, whereas in Elastic Common Schema they are actually nested objects. So are we going to come up with some sort of generic approach to convert the structure of one to the other somehow? Because it's not entirely clear what we want to do here.
G: I think, yeah: this fact that Elastic Common Schema is hierarchical while the OTel semantic conventions are more flat is a topic that we discussed 18 months ago. And I recently saw in the logs schema that, in some ways, I had the feeling there was some kind of hierarchy bubbling up, with a log.file to track the file name.
A: Yeah, in OpenTelemetry concepts are hierarchical; sometimes the representation of the concepts in the attributes is just flattened, but conceptually the notion of nesting does exist. Let me bring up an example: for Kubernetes, k8s. is a starting namespace for concepts which are nested inside that concept of Kubernetes, so you have pods and nodes, and everything goes under k8s.
A: That is not a problem; I don't think there is a mismatch there in the understanding of the world. There is a mismatch in how you record that knowledge about the world. In the attributes in OpenTelemetry it's flattened: you flatten it into a certain form using dot notation. In Elastic Common Schema you keep the structure as is, in the form of nested objects. So I don't know what we do about this mismatch; it's not clear.
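The structural mismatch being discussed is mechanical, so a small sketch may help: nested ECS-style objects and flat dotted OTel-style attributes can be converted back and forth losslessly. The helper names and the sample document are hypothetical.

```python
# ECS keeps nested JSON objects; OTel attributes are flat keys with dots.
# The two shapes are mechanically interconvertible.

def flatten(obj, prefix=""):
    """ECS-style nested objects -> OTel-style dotted keys."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))
        else:
            out[name] = value
    return out

def unflatten(attrs):
    """OTel-style dotted keys -> ECS-style nested objects."""
    root = {}
    for name, value in attrs.items():
        node = root
        *parents, leaf = name.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return root

ecs_doc = {"log": {"file": {"path": "/var/log/app.log"}}, "message": "hi"}
flat = flatten(ecs_doc)            # {"log.file.path": ..., "message": ...}
assert unflatten(flat) == ecs_doc  # the conversion round-trips
```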
G: In Elastic Common Schema, Elasticsearch makes it equivalent to dot the things or to nest them in JSON, yeah. So I'm wondering if this would be a kind of implementation detail for the Elastic storage.
H: Yeah, and I also think, I mean, there are two things maybe, and then I'll just hand over to Ani, because you raised your hand. But two things: one is field naming and structure, and the other is the semantic meaning of things. I think it's easy to map structural differences, at least from my understanding; it's way harder to map semantic meanings, right. So I expect that's where the work will be on the technical side.
B: Thanks, guys. Jeremy, yeah. Hi, sorry, I'm joining this for the first time; I just got a last-minute invite and knowledge about this meeting, so I'm on my way to work, talking from here. By way of quick introductions: I go by Ani. I've been leading the search and query language team at OpenSearch, and currently also observability. Before this I was leading observability at Bloomberg, and search and Lucene/Solr stuff for a very long time. The space is particularly fun for me.
B: Talking about schemas, and adding onto the current conversation: we've been thinking about similar stuff with schemas. Currently, the state of the world in observability for most use cases is garbage in, and we can't do much but garbage out; there isn't much value-add that we can do on top of it. Thinking about schemas, I'm drawing from the financial domain.
B: Before this I was working at Bloomberg, and we'd been working with a lot of taxonomies and schemas in the financial instrument work. The way I think about schemas is that there are four schemas, not just one schema, and there are relationships between them, and maybe we are talking about one of them right now, or all of them. So let me just introduce them and see what you guys think.
B: The third part is the physical representation, and this changes with respect to the engine: OpenSearch is good for certain things, Lucene is good for certain things, and Prometheus is good for certain other things. The way you reorient stuff, the stuff you're talking about with dot notations and how we are flattening things, these exist really because the physical schemas have certain representational properties of their own. And the fourth type of schema is the ingestion schema, in which the data is actually sent over transport. So, what we have been working on: we've loved what Elastic has done and started there; we're also looking into what OpenTelemetry has been doing, and we're trying to create a combination of these things which can work together. The base implementation of the schema definition, we're thinking, is in Apache Avro. Avro is a very good schema definition language: it allows you to version, do backward and forward compatibility, handle all those tiny nuances very nicely, and build good software with test cases on all those base primitives. But as a transport tier, Avro is not that good.
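A rough illustration of the Avro property Ani is pointing at: schema evolution stays backward compatible when every newly added field carries a default. The record schemas and the simplified compatibility check below are our own sketch, not the full Avro schema-resolution algorithm.

```python
# Two versions of a hypothetical Avro record schema for a log record.

log_v1 = {
    "type": "record", "name": "LogRecord",
    "fields": [
        {"name": "timestamp", "type": "long"},
        {"name": "body", "type": "string"},
    ],
}

log_v2 = {
    "type": "record", "name": "LogRecord",
    "fields": [
        {"name": "timestamp", "type": "long"},
        {"name": "body", "type": "string"},
        # The new field has a default, so a v2 reader can still consume
        # data written by a v1 writer: that's backward compatibility.
        {"name": "severity", "type": "string", "default": "INFO"},
    ],
}

def backward_compatible(old, new):
    """Simplified check: every field added in `new` must carry a default."""
    old_names = {f["name"] for f in old["fields"]}
    return all("default" in f
               for f in new["fields"] if f["name"] not in old_names)

assert backward_compatible(log_v1, log_v2)
```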
B
So
from
a
transport
tier
you
build
on
top
of
avro,
but
move
to
avro
a
protobuf
or
some
form
of
column
store,
but
use
apache
arrow
in
between.
So
we
have
a
zero
copy
store
which
goes
in
and
transfers
data
high
performance
when
you're
actually
moving
data
through
the
wire,
and
we
need
libraries
without
libraries
in
multiple
languages.
The
schema
is
going
to
be
more
of
a
conversation
piece
rather
than
implementation
with
everybody
else.
How
do
we
get
these
languages
pre-built
and
auto
compiled
into
different
go
rust,
java
python?
E: But in this case we are definitely focused on the ingest layer: not the storage layer, not the query layer, not any of the other pieces. How do we normalize the data, in terms of the way we structure it and the way we can interrelate data as it comes into an ingestion pipeline and gets processed by the OpenTelemetry collector, so that we can have more intelligence in the definition of the data and the schema itself?
E: That's the main goal; it's not any of the other pieces that you mentioned. But OTel does obviously provide the ability to do instrumentation in languages; that's one of the main points of the project. If you're a Go developer, you have an SDK, and the SDK allows you to emit telemetry data in logs, metrics, and traces.
E: Now the question is: do we want to add a schema, so that when a person emits logging telemetry there's more to it than the pretty basic construct we have today in the schema for logging?
H
That
did
initiate
because,
like
differentiating
like
the
query
time
thing
is
the
thing,
obviously,
because
open
telemetry
will
in
some
way
also
then
emits
this
data
in
in
a
well-defined
way,
which
will
automatically
then
grow
into
the
into
the
back
end,
and
people
will
store
it
in
this
way.
Right
so
and
then
it
will
become
a
common.
You
will
know
that
you
will
find
this
type
of
information
likely
in
this
field,
wherever
you
could
do
it.
I: Also, you know, for advanced use-case support. And we have had some initial discussions with the awesome Elastic team, with Daniel and with Cyril, on trying to figure out some next steps, and then kind of composing an OTEP based on that.
A: Right, yeah. So I guess, if I try to summarize what I was just saying very quickly: we discussed this; I think it's valuable to try to do this; I am not sure how we do it. We need a champion to actually go ahead and make a proposal about how this goes forward, because I'm not certain what the details look like. I believe there is definitely value in trying to bring that valuable definition that ECS is into OpenTelemetry, in one way or another.
I: No, no, it's a... yeah. I mean, it's a specification first, because there are specific definitions in the schema that can first be supported in the spec, and then implementation is separate, right; that's a different layer. So typically, and as Tigran has already pointed out, it's an OpenTelemetry Enhancement Proposal that is made, based on the specification.
C: Well, three things; I'll start from the top. A world in which more and more people will use a common schema to shove data around, for sort of telemetry use cases, not any sort of data analytics (I think that's aiming too high), but for the stuff that we're doing over here when it comes to telemetry: I would be happy if that is an evolution of some sort that has all of ECS.
C: Etc. Right: a way for people that emit this ECS, or a future version of it, whatever that's going to look like and be called, to pass it cleanly through the OpenTelemetry collection infrastructure. If I'm a source, I'll put it out there, including log collection and maybe concentrated agents and all that type of stuff, in a way that ideally doesn't cause terrible processing overhead on the collector for parsing and reparsing and restructuring; all of that is eventually just, you know, burnt CPU, right. And if it already comes in a sympathetic format and we can just shove it through, that would be great. And then finally, for folks who want to use OpenTelemetry libraries to create telemetry: logs in particular can come from all kinds of places, right. Hopefully in the future logs will also come not just from some other place, but straight up from the OpenTelemetry libraries, or the Log4j vendor, or... I probably should not say Log4j this week, but eventually, again, when people are like, okay, we're using that thing again: to have a sort of clean way to pass it through; to get to a sort of rough-consensus, common standard format for the internet for this type of stuff; and finally, to have guidance for people who want to emit structured data that is not traces or metrics, but logs, events, that type of stuff. So I think those would be great outcomes.
C: I haven't written it down; I just synthesized it out of my thoughts, probably not perfectly formulated; you guys should add or delete. But that would be one thing: I think if we were to go write up an OTEP or something, in my mind there's got to be a section that motivates it on top, and those are some of my ideas on what could motivate it and why I'm excited about this process.
C: So, Tigran, you have a very deep and wide view on things, both from the spec perspective and, obviously, as an implementer who knows all the way down to the wire and the protocol definitions. I can sense a lot of those questions are in your head. I don't think we can resolve all of this today.
C: From my perspective, the takeaway, and we've got a tautology here, but the takeaway is: we have a quorum of people who seem to be mostly smiling and seem to be okay with this idea, to sort of attempt this, right. You just spelled it out, and the vendor that owns this thing, we are there, via the PM leadership.
C: And I know there are people above, and all kinds of sides of people that need to agree eventually, but I think the folks that can potentially actually make this happen are all here, and they all seem to be quite interested in making it happen. So then, what I'm wondering is:
C: As you said, who is the champion? I feel that we had a person that raised their hand, and, you know, can we run this through an OTEP process? And then, from the logs SIG perspective in any case, who would we want? Like, who are the one or two people that we would like, on top of the Elastic folks obviously, to be part of this? Something like that. Yeah.
I: I'm happy to support this and, again, work with everyone collaboratively to make sure that we have captured all the different details for the OTEP, and to compose and submit that. Because I think this will be very useful for our project, as well as for being able to have a robust, well-rounded spec for supporting logs.
H: And we will. So that's a commitment we can make: Cyril and I will collaborate on this OTEP as well, and we'll see where we get with that.
I: Correct, and the idea was really that we would actually collaborate on developing the OTEP and then presenting that to the spec SIG, right. So, absolutely, it has to align with the overall tenets that the spec has been subscribing to and prescribing, and just make sure that the definitions, the semantic conventions, and all the other details are aligned.
H: Yeah, and maybe if you, Tigran, and everyone involved here figure out that there are already major objections, like while you're talking internally or something, then just let us know up front, so that it's not that we go all the way and then it's blocked at some point.
B: I had one question, maybe a fundamental one: if OTel is describing the ingestion side of things, and the core part of the schemas is used to align on sending data, how do we fill the gap between sending the data versus the things that are required to use the data? Where does that responsibility lie, and how do we design or align on that? Would that also be part of this, or do we keep that separate?
I: Are you talking about processing the data, Ani, after it's collected?
B: So after it's collected, it takes some form, or you need some enrichment, depending on how users are going to query it, and the user use cases and the ingestion use cases don't always match, yeah. So there need to be some different types of schemas, or some different ways to query data. How do those two align? Does each vendor take care of this independently and we only define ingestion? I mean.
I: Again, Tigran, maybe you can address this, because I think it's a semantic conventions discussion there, as well as how instrumentation, if any, is applied, right, from OTel's standpoint. But I would like to hear from you, yeah.
I: Yeah, and that's one of the reasons why OTLP matters so much: as long as your back-end service can understand OTLP, that standardizes the ingestion of data, and you're able to handle a rich set from anywhere, and then to process it according to whatever value-add, or not, you're providing on the back end.
F: Hey everyone, my name's Eli; I'm a product manager. I work on OpenSearch at AWS. Lots of great ideas going on. What I wanted to ask: since there are so many different things going on, I'm just trying to wrap my head around where we need to first put energy as a group. Is it defining the specification, like what are the facets that represent a log event? Is it something else? I just want to make sure that we're all focusing on the right first step here.
I: I mean, Eli, I can provide you the guidelines, but pretty much there is a log data model and a spec in, you know, alpha form today on the project already, and that is something where there have been multiple prototypes in terms of implementation.
I: So Tigran has been the leader in terms of driving, along with all of us and the larger community, actually establishing the specification, along with semantic conventions, to be able to define what the data protocol is for the collection of logs, and then how that's delivered to the back ends. But that's kind of our focus, and there is a fair bit of documentation, which I would recommend you guys go through as you get more involved. Yeah.
E: I just shared the logging [docs] with Eli. Thanks.
I: I mean, and that's definitely something that Cyril and I talked about earlier, also, right. That's one option, because again it gives the flexibility for Elastic to be able to innovate on top of what exists as a baseline for ECS, as well as for OpenTelemetry to actually be able to support ECS fully, along with any other use cases that intersect with the logs scenarios. Right, Jonah? I mean, I think that's aligned with what we had discussed.
A: Yeah, I think so. Anyway, I think what we're expecting here is an OTEP which describes how this all works. Yes, and I'm looking forward to reviewing that.
B: Oh, this is good. I think I'll work with you to get the first piece out.
I: Awesome. Christian, any other areas on your end? Which, again, we will iterate on, so no worries. And so, who's...
C: ...going to, yeah. Absolutely happy! Sorry, sorry. You know, I'm really happy; I think this has legs, and I'm open: I'm happy to contribute as much as necessary, or to observe as much as needed; it's all fine. Alolita, thanks for kind of raising your hand. I think this is going to be great, and we'll get there. So.
I: Tigran, did you have other items that you wanted to cover, in terms of an update on where you think we are versus where we should be with the existing issues that we are going through? Do you need any kind of support on the semantic conventions work, or anything else right now, or are we just kind of in a holding pattern until we deliver metrics?
J: Hi. So, the thing that's on my mind may be in part related to the prior conversation, but very narrow in focus. First: I'm Alan, I work at New Relic. I'm participating in some of the other SIGs, but this is the first time that I've joined this group, and I also just joined the client instrumentation SIG for the first time, about an hour ago, to talk with them about this
J: same thing. So I'm looking to start a conversation, or to find the people who may already be having this conversation. What I've observed is that the log data model is beginning to be picked up and used for other things. So I think, Alolita, it was you that opened up the initial proposal for a RUM event, so real user monitoring.
J: Should they be a different thing? Like, should we create a new data model? What it seems is that the community has decided: no, let's use the OpenTelemetry log data model. And I've heard a lot of great arguments for why that should be the case, and I feel there's value there.
J: But the thing that I'm personally trying to suss out, and I want to get some guidance on, is next steps. Is it an issue that I should open against the OpenTelemetry specification, or an OTEP, or something? Basically, I want to propose an idea: if we're going to use the log data model to basically create whole new things, there are things that are going to come with it, like semantic conventions for those things, like real user monitoring, or like
I: events that are actually picked up from that whole layer of metrics, as well as other data.
J: Yeah. I feel like things like attributes on resources, or semantic conventions about attributes on a log payload... this is just my gut right now, I'm not 100% attached to this, but I feel like it falls a little short, specifically in the context of talking with some of my back-end teams, who are like: hey, if I receive a log payload, we're going to ship that to our logging-ingest team, but if it's a real user monitoring event...
A: I think the way the log data model is defined today makes it generic enough to be suitable for a variety of use cases where you can talk about events. Is it useful for everything? I don't think we can make that blanket statement; probably for some use cases it's not. In particular, I guess for eBPF it may not be efficient enough, because of the nature of eBPF, which requires huge volumes of data to be passed. But for some other use cases it probably is.
A: For example, for RUM it's probably a good enough representation of RUM events. Now, how do you differentiate RUM events from other events, so that you can put them in the right bucket in your back end? For that, I think we have an answer in OpenTelemetry: you define a semantic convention and you put in an attribute which says what kind of event it is.
A: So if you're recording RUM data in a log event, you probably want to define something like opentelemetry.rum equals true, or something like that, and then you record the rest in other attributes in that same namespace. Or maybe rum is a top-level namespace, so everything related to RUM goes under rum., whatever, and then you have rum.present equals true, or something like that. Usually we did things like that in OpenTelemetry.
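A minimal sketch of the marker-attribute idea just described, assuming a hypothetical rum.* namespace (rum.present and rum.session.id are invented names for illustration, not adopted OTel conventions):

```python
# A marker attribute inside the rum.* namespace lets a backend route a
# generic log record as a RUM event.

def is_rum_event(log_record: dict) -> bool:
    """Route on the presence of the (hypothetical) marker attribute."""
    return log_record.get("attributes", {}).get("rum.present") is True

record = {
    "body": "page_view",
    "attributes": {
        "rum.present": True,          # the marker
        "rum.session.id": "abc123",   # everything RUM-related stays under rum.
    },
}

assert is_rum_event(record)
```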
A: Suppose instead I want to have a different kind of classification, where I have type equals something else, not RUM, but RUM is also valid there. Let me come up with an example: here, type is RUM events from browsers or from mobiles, right; it's the type of a thing: is it mobile or is it a browser?
A: Now I want to record another type, but I can't: type is already used for something that we wanted to record for RUM. So you don't do that, right. You use more specific attribute names that are under the governance of that particular, let's say, pillar. RUM is responsible for everything that comes under the rum. namespace, and then you put everything related to RUM there.
J: What would be the guidance you might give to, say, the client instrumentation SIG? I'm not a RUM guy, so I'm not super familiar with the types of things it's trying to model, but my understanding is they're trying to model some set of different types of events, like session start; that's an example of one thing.
J: Yeah, that's interesting. The other example, and this may be just more of a New Relic-ism, but we have customers that send us data that is very specific to their business use cases. So we might have an e-commerce customer that sends point-of-sale system data, and they have this notion of, for lack of a better term, an event type of POS system, and they have a means of... that doesn't go to our logging ingest, necessarily.
J: It goes through a generalized ingest pipeline, but then has the analysis tools available for the customer to be able to slice and dice that however it makes sense for their very custom use case. So, you know, that's another thing
J: that's in the back of my mind: is that really just a New Relic concern, or is that a larger observability concern, for other vendors to be able to enable customers to basically, again for lack of a better term, construct their own signal types of sorts?
A: Yeah. So, if it's a company-specific thing, OpenTelemetry says you just use your FQDN as a prefix for the attribute name. If it's just for you, and nobody else uses it, you put your company name as a prefix, and from there on you're free to use whatever continuations of that you want as the attribute names. So if that company has a unique POS system, they probably want to use company-name.pos. and then whatever it is that they want to record.
A: If it's something that is applicable to the community at large, then you probably want to make a proposal for OpenTelemetry to add it to the more generic semantic conventions that are part of the OpenTelemetry specification. In that case it becomes more of a top-level namespace; maybe pos. is the namespace for all of OpenTelemetry in that case. So it depends on how specific or how generic it is; depending on that, the answer can be different, and that is defined in the specification.
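The naming guidance can be sketched as follows; acme.com and the pos.* keys are invented examples, and the set of spec-owned namespaces shown is an illustrative subset:

```python
# Company-specific attributes get the company's own domain as a prefix,
# so they can never collide with spec-owned top-level namespaces.

OTEL_NAMESPACES = {"k8s", "service", "http"}  # illustrative subset only

def vendor_key(domain: str, *parts: str) -> str:
    """Build a vendor-scoped attribute name from a domain prefix."""
    return ".".join((domain,) + parts)

key = vendor_key("acme.com", "pos", "terminal.id")
assert key == "acme.com.pos.terminal.id"
# The domain prefix guarantees no clash with spec-owned namespaces:
assert key.split(".")[0] not in OTEL_NAMESPACES
```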
J: Thank you for all that. I know we're kind of running low on time. I still have questions and thoughts that I'm mulling over in my mind, and I'm wondering if I could open up an issue; would it make sense to open it on the OpenTelemetry specification repository and continue this discussion as I kind of...

J: Okay, thank you. Thank you.
C: Can I have a quick follow-up on this, please? To go to my hobby horse of trying to somehow classify what's in the body: I think this to some degree goes back to that, and you just explained it pretty concisely with this idea of having basically a boolean semantic convention, rather than a field with some sort of enum, I guess, right. Are there...?
A: Totally. Implicit is okay, and almost everything is like that. If you go there, all of the concepts that are supposed to be recording telemetry have this set of attributes which have a common prefix, and you typically have one of those as a required attribute, and the presence of that required attribute is telling you that this is the entity that is emitting the data; that is the source of the data.
A: Typically, the applications don't know this. This telemetry typically passes through a collector which, if installed as a Helm chart, knows how to enrich the telemetry and put the Kubernetes-related attributes there. So that's how it typically works: you emit your application-specific attributes; it goes through the collector, which looks up the emitting source and knows how to enrich the data with the appropriate Kubernetes pod attributes.
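The enrichment flow just described might look like this in miniature; the lookup table stands in for what a real collector processor (such as the collector's k8sattributes processor) learns from the API server, and all names and values are illustrative:

```python
# The application emits only its own attributes; a collector-side step
# looks up the emitting pod and merges in k8s.* resource attributes.

POD_METADATA = {  # stand-in for metadata learned from the API server
    "10.0.3.7": {"k8s.pod.name": "checkout-5d9f",
                 "k8s.namespace.name": "shop"},
}

def enrich(resource: dict, source_ip: str) -> dict:
    """Merge pod metadata for the emitting source into the resource."""
    enriched = dict(resource)
    enriched.update(POD_METADATA.get(source_ip, {}))
    return enriched

app_resource = {"service.name": "checkout"}  # all the app itself knows
full = enrich(app_resource, "10.0.3.7")
assert full["k8s.pod.name"] == "checkout-5d9f"
```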
J
Right
and
I
guess
the
part
that
I'm
still
a
little
bit
confused
about
is
like
it
seems
important
to
know
what
type
of
entity
or
resource
I
guess
is-
is
emitting
data,
and
it
sounds
like
in
addition
to
looking
at
if
I
were
ingesting
this
data.
I'd
have
to
look
at.
Oh,
it
has
k
it's
pod
name,
but
does
it
also
have
these
other
attributes
that
help
me
further
determine?
A: Definitely. Okay, okay, yeah. One possible approach there was to introduce this notion of entities, which have identifiers, which are sets of attributes, and which also have non-identifying attributes that are more descriptive of the entity. Then you record not just a resource, which is just a flat list of key-value pairs; rather, with each piece of telemetry data you record a set of entities, and then you specify which of those entities is actually the source of the telemetry, and possibly, optionally,
A
You
record
relationships
between
the
entities,
so
I
have
an
application
that
is
part
of
a
process
which
is
running
on
kubernetes
port,
which
is
running
well.
This
kubernetes
node
as
part
of
this
kubernetes
cluster
right
and
those
are
all
those
the
cluster,
the
node,
the
port,
the
process,
those
are
separate
entities
and
each
has
their
their
own
attributes
recorded
separately.
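The entity proposal as described can be modeled in a few lines; this is a sketch of the idea only, not an implemented OpenTelemetry data structure, and all the names are illustrative:

```python
from dataclasses import dataclass

# An entity has identifying attributes (its identity) plus optional
# descriptive, non-identifying attributes.
@dataclass(frozen=True)
class Entity:
    type: str
    identifying: tuple       # e.g. (("k8s.pod.name", "checkout-5d9f"),)
    descriptive: tuple = ()  # non-identifying, purely descriptive

pod = Entity("k8s.pod", (("k8s.pod.name", "checkout-5d9f"),))
node = Entity("k8s.node", (("k8s.node.name", "node-7"),))

# Telemetry carries a set of entities, marks one as the source, and may
# optionally record relationships between them.
telemetry = {
    "entities": [pod, node],
    "source": pod,                          # which entity emitted the data
    "relationships": [(pod, "runs_on", node)],
}

assert telemetry["source"] in telemetry["entities"]
```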
I: Yeah, I mean, again, as Tigran said, there's really a lot of work to be done there; we haven't done it yet. So again, I think we will need to do some more work on the semantic conventions, also in order to better define how we handle these values. So, Alan, again, feel free to chat with us; you can discuss and help in any way.
C: I think one last thing: just looking at the notes, people are wondering whether we're going to meet next week. My suggestion is probably not, and then I think the same question goes for the following week. Does somebody want to make a call? We can record it here.
I: I think most meetings for the last week are cancelled anyway for the project. For next week, Tigran, it depends on availability, I guess.