From YouTube: 2020-10-28 meeting
D: Okay — what do we have here? Andrea: triage and prioritization?

D: That's great! You can help with that. That would be actually very, very useful. It's been a while since we last did the triaging, so it will be very useful to do. One thing that we're doing slightly differently from the specification is that we're using the milestones to mark things that are necessary for the GA.

E: The milestone is just for — let me take a quick look.

D: So there are two milestones defined there currently: the 1.0 GA, which is what we believe is necessary for the GA, and the backlog, which is what we believe is after the GA. And then it's easy to choose the issues without a milestone, which is what we use for finding what needs triage. That's what we were doing so far, and it's been a while since we did it last time, so I think we have some that actually need attention.
E: Some of the items I would like to bring to the table for triage and prioritization: I saw that there are some bugs — there are 26 bugs — but not much indication as to which one is P1, P2, or P3, or whether they're all super urgent and whether they have to go in the very next release. So that's some of the things that I'd like to bring to the table. Yeah.

D: That's a good point. So I guess, yeah, we're not using the priorities. It would be useful to do that as well, since we already have a significant number of those, so — yeah, I think it's important at this point. So do you think it's still okay to use the milestones plus labels for priorities, or would you prefer to have the labels used universally to mark the requirements for the GA as well?
E: So, usually the milestones work best for representing those things that are going into a release with a due date on it. They can then help with generating a changelog, or double-checking certain things. And then the labels — I usually have a couple of different dimensions things can be cut along, so that way we can get reporting on different areas, and choosing those areas is what I need help on. Like, one dimension to cut through is priority, and [another is] the spec SIG.

E: How about I put together specific labels, and then I think, as maintainers, you guys can decide, like, "okay, keep that the same" or "maybe we should have this or that", and then I'll need to set up a meeting with the maintainers, you know, to do an initial scrub.
E: Once we get the issues triaged, then [we'd have] a regular cadence at the beginning of SIG meetings just to check new issues coming in and triage them quickly, much like we do in the spec SIG. So it's going to be a bit of a hump in terms of investment time to get this all sorted out, but then afterwards it's easy to keep going, and I can help with that whole process. Sounds good?

E: That would be great. Okay, so I'll put more notes in the Gitter, and also on the Slack channel, and in the Google doc. Okay.
D: Okay, so I guess we're good for now; it will be for next time. I think at the next SIG meeting we can maybe go over it, or maybe you can schedule a separate meeting to do that for the first time. Yes.

E: Who do you think is best to rope in? Is it just Bogdan and you — the maintainers?

D: And whoever from the approvers would like to attend — I think they would be optional. Let's invite everybody, but the maintainers will be the most useful.
G: Sorry — sorry to interrupt you. I just wanted to ask you to include me as well, if it is in the same time zone as this one here. Every couple of weeks we alternate the time zones, and the other one is really hard for me, but for this one here I can join. Even if I cannot help that much, I would like to participate, and hopefully in the future be more...
D: Okay, cool. Next: "Can we improve the assigned-review process?" Yeah, I saw the issue that you created, and I completely agree with you. I receive so many email notifications, I'm swamped; if we can do anything about it, it's very, very welcome. I am not sure why I'm actually receiving all those emails on every single PR where I'm neither the author nor the assignee. So if we can fix this, that would be great — I just don't know how.

G: Yeah, in the end, it is defined by whoever is assigned as a reviewer to the PR.
D: I don't know what the reason is for all of us being assigned as code owners for every subdirectory. We need to check with Bogdan — I remember we didn't have that initially, and then he said there was a problem because of that, but I just don't remember what the problem was. Let's check with him. If you don't know, there is some reason for that, but...
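For reference, GitHub's CODEOWNERS file is what drives these automatic review requests. A hypothetical sketch of per-subdirectory ownership (paths and handles invented for illustration — not the repository's actual file):

```
# Hypothetical CODEOWNERS sketch — paths and handles are invented.
# The *last* matching pattern wins, so the catch-all goes first and the
# narrower per-component entries below it limit who gets auto-requested.
*                            @org/collector-maintainers
/receiver/statsdreceiver/    @statsd-component-owner
/processor/groupbytrace/     @groupbytrace-owner
```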
D: I would be very, very welcoming of this change, because right now it's just unknown.
D: Can you expand on this?

I: Yeah, sure. The thing is, for the current behavior — for the current receiver — we receive the statsd message and parse it, and then pass the metrics to the following workflow, like the processor and exporter. But for statsd — I mean, for behavior similar to the statsd server — we need to have an interval so that we can calculate. For example, if we're talking about a gauge, we can get the last value in the interval and pass that value to the following process. Yeah, so that's the reason why we need an interval to aggregate. And you can also think of a counter: you need to add up some value within this interval; if there's no interval, we cannot do the aggregation. Yeah. So we talked with Josh, and — yeah. I think before, we planned — I mean, our plan was that the receiver only parses the metrics and doesn't do any aggregation, but it seems a little bit different for statsd.
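For illustration, a rough sketch of the per-interval aggregation being described, with invented types standing in for the collector's internal metric structures (this is not the actual statsd receiver code): gauges keep the last value seen in the interval; counters sum their deltas.

```go
package main

import "fmt"

// point and aggregator are invented types standing in for the collector's
// internal metric structures; they only illustrate the interval logic.
type point struct {
	kind  string // "gauge" or "counter"
	name  string
	value float64
}

type aggregator struct {
	gauges   map[string]float64 // last value wins within the interval
	counters map[string]float64 // deltas are summed within the interval
}

func newAggregator() *aggregator {
	return &aggregator{gauges: map[string]float64{}, counters: map[string]float64{}}
}

func (a *aggregator) add(p point) {
	switch p.kind {
	case "gauge":
		a.gauges[p.name] = p.value // keep only the most recent value
	case "counter":
		a.counters[p.name] += p.value // accumulate deltas
	}
}

// flush hands back the aggregated interval and resets for the next one.
func (a *aggregator) flush() (gauges, counters map[string]float64) {
	gauges, counters = a.gauges, a.counters
	a.gauges, a.counters = map[string]float64{}, map[string]float64{}
	return gauges, counters
}

func main() {
	agg := newAggregator()
	agg.add(point{"gauge", "queue_depth", 7})
	agg.add(point{"gauge", "queue_depth", 3}) // replaces 7
	agg.add(point{"counter", "requests", 1})
	agg.add(point{"counter", "requests", 2}) // sums to 3
	g, c := agg.flush()
	fmt.Println(g, c) // map[queue_depth:3] map[requests:3]
}
```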
B: Yeah, so — okay, we've discussed this endlessly with Josh MacDonald; I think we understand what we're doing with the statsd receiver. Our question here is not around implementing any part of statsd, really — it's not statsd-specific. It's about how we implement the code. So the code needs to be continually listening on a UDP port, and then, every configured time interval — say, 30 seconds —

B: — it needs to process all of the messages collected in the last 30 seconds — the UDP messages — and then, you know, process the metrics and send them down. And Gavin was not sure how to do the code for that; he couldn't find any existing receiver that does something similar. Like, there are receivers that do something on a time interval, but they don't also collect things continuously. So we're wondering: do you have, like, two goroutines — maybe one that's listening, one that's running periodically? Like, how does that work?
D: Yeah, I think that sounds reasonable to me. Whatever you want to do in terms of this — it sounds like some sort of background process that needs to run, and the receivers are expected to do whatever they need to do for that sort of thing, right? Goroutines sound good to me. If you have a skeleton of the code you're not sure about, create a draft PR and I can have a look. But I think that's the way you do it, right?
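A minimal sketch of the two-goroutine shape being discussed (illustrative only — the port, interval, and message handling are placeholders, not the actual receiver implementation):

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"time"
)

// One goroutine reads UDP packets continuously; another flushes the
// accumulated messages every interval, as described in the discussion.
func main() {
	conn, err := net.ListenPacket("udp", "localhost:8125")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	var mu sync.Mutex
	var pending []string // messages collected since the last flush

	// Goroutine 1: listen on the UDP port and buffer raw messages.
	go func() {
		buf := make([]byte, 65535)
		for {
			n, _, err := conn.ReadFrom(buf)
			if err != nil {
				return // connection closed: stop listening
			}
			mu.Lock()
			pending = append(pending, string(buf[:n]))
			mu.Unlock()
		}
	}()

	// Goroutine 2 (here, the main goroutine): every interval, take the
	// buffered messages, aggregate them, and send them down the pipeline.
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		mu.Lock()
		batch := pending
		pending = nil
		mu.Unlock()
		fmt.Printf("flushing %d messages\n", len(batch)) // stand-in for real processing
	}
}
```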
G: We use an AfterFunc — time.AfterFunc — for the groupbytrace processor. We hold things in memory, and then, once we receive a new trace, we schedule something to run in 10 seconds. So you can do something similar here: instead of having a pool of goroutines, you can just have a recursive timer func, you know.

G: For the groupbytrace it's not as —

G: Yes, it's not exactly the same use case, but it's very similar.
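A minimal sketch of that time.AfterFunc pattern (illustrative only; the groupbytrace processor's real logic schedules per trace rather than one global timer):

```go
package main

import (
	"fmt"
	"time"
)

// scheduleFlush re-arms itself after every run: each flush schedules the
// next one, instead of keeping a dedicated ticker goroutine alive.
func scheduleFlush(interval time.Duration, flush func()) {
	time.AfterFunc(interval, func() {
		flush()
		scheduleFlush(interval, flush) // re-arm the timer after each run
	})
}

func main() {
	scheduleFlush(time.Second, func() { fmt.Println("flush") })
	time.Sleep(5 * time.Second) // keep the program alive for the demo
}
```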
D: Thanks. Right — next, time to talk about the Prometheus receiver issue, "Prometheus receiver stops scraping". I wonder if I could get some help, because — is anybody here familiar with the Prometheus receiver, who knows anything about this? Can somebody help with this?

H: I'm not sure if Jason is here — but I think Jason is here.

J: Yeah, we've just been looking into this issue. It doesn't look like it's an issue in Prometheus, so I guess it's specific to the collector itself, but yeah.
H: Yeah, so it's breaking — basically, you know, Prometheus receiving on Kubernetes — because as soon as you do a rolling update, which is very common (I mean, you will do it whenever you're deploying new versions), the collection is broken. So we cannot suggest the collector to Kubernetes users at this point, you know, if they're using Prometheus. And we looked at — we were just reviewing the receiver.

H: It looked very similar to what Prometheus is doing, with all the discovery, and collecting based on those auto-discovered endpoints and so on, but we are both not very familiar with it. So if anybody is experienced, we would like to escalate it to that person. Otherwise we will be spending time on this — and we probably should do that if no one else is up for it.
H: Okay, okay. So — I mean, the next one is also me, by the way. I have a quick question, since I have very little context on what's going on on the data model side. So, you know, the summary type from Prometheus is not represented in OpenTelemetry, so I wonder if this is an existing issue for other people, or, you know, are there any existing plans to —
D: — add support for it to the protocol. I don't know where exactly we're going with this, but I'm guessing that it was there previously and it was removed because we didn't know what the good use case for the summary type was. So I suspect that, since the use case is now described properly, it's likely to be restored in the protocol. But the issue is in the... let me find it; I can post a link, so if you —
H: I saw — you know, I talked to Alolita about this before talking to you, so I saw her proposal. I just wanted to get more context, because outside of what I discussed with other folks one-on-one, I didn't really have a good understanding. So this clarifies a lot of things.
H: Okay, thanks. The other question that I had was: is there any, you know, thought — I've seen some issues or some thinking around this — like multi-tenancy? I feel like this may come up as a long-term problem: for example, deploying the collector to Kubernetes is one thing, but what if we are going to deploy this on —

H: — Lambda — like, you know, environments where you actually have very short-lived processes, and you just want to have a one-collector-per-host type of situation? It will be an interesting issue in the long run. So, if there's any work going on there, I would love to take a look; otherwise I can also do some work and maybe propose something. So, you know — again...
H: Is it the one, though, with the header? There's only a header that you pass that identifies, I guess, the tenant — there's no isolation, there's just a header, right?

G: Where we discussed multi-tenancy in the past, it currently just uses an attribute from the metadata — so gRPC or HTTP.

G: The idea is that there would be something running before this processor — before this multi-tenancy processor — that would enhance the context with this information.
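For reference, a minimal sketch of pulling a tenant identifier from incoming gRPC metadata; the "x-tenant" header name is an assumption for illustration, not necessarily the attribute the processor actually uses:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

// tenantFromContext pulls a tenant identifier out of incoming gRPC
// metadata. The "x-tenant" header name is an assumption for illustration.
func tenantFromContext(ctx context.Context) (string, bool) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return "", false
	}
	if vals := md.Get("x-tenant"); len(vals) > 0 {
		return vals[0], true
	}
	return "", false
}

func main() {
	// Simulate what a receiver would see for an authenticated client call.
	ctx := metadata.NewIncomingContext(context.Background(),
		metadata.Pairs("x-tenant", "customer-a"))
	tenant, ok := tenantFromContext(ctx)
	fmt.Println(tenant, ok) // customer-a true
}
```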
G: Information from whoever is sending the data — so now there is authentication there as well. My goal — we have this requirement at Red Hat as well, to have multi-tenant collectors, and our idea is indeed to have a multi-tenant solution. You know, right now it's just pieces scattered around, but I think by now we should have all of them in place, so it's just a matter of, you know, trying it out and seeing if it works.

G: Yeah, I mean, we have this idea — I think at the very end of the proposal I did add one question: how about dynamic configurations, right? This is —

G: It's great to hear — it's great to hear. I really need someone as well, you know, to talk multi-tenancy with.
D: Okay, what's next? So, we discussed that — "convert all resource attributes to metric labels".

H: This is a use case that I am interested in, if, you know, we want to provide some control plane for collectors. So one of the ideas is: if OpenTelemetry is going to be everywhere, collecting all the telemetry data, it's just useful for users not to deal with the configuration by passing their own configuration file, but maybe via a UI or some other control plane, so they can go and configure things dynamically as well. I talked to Bogdan about this.
H: We decided, like: hey, we would just do a rolling update with the new configuration. Like, you know, we don't have to care too much about dynamic-configuration capabilities right now, because in the worst case we can do a rolling update. We just need to sort out whether it's okay to, you know, gracefully shut down and have, like, some —

H: You know, we still need to figure out how we will do the rolling updates, to be honest, because if you just do a graceful shutdown and start the process again, there's some time window where you may be missing telemetry data. But you can, you know, figure out a model where you start another collector and gracefully shut down the other one.

H: So there is no availability issue for the collector. So I think this might also be something that may come up in the long term, as we enable some of these conveniences, like control planes. It's something to think about — I'm not, you know, assuming that we will have an answer today.
H: It's just the way it is, and there are several — Prometheus, for example, does reloading of configuration to enable some of these cases, and I believe it will be a difficult problem for us to solve if we want to have this feature. So maybe it's more important for us to figure out how to, you know, change configuration without downtime, rather than addressing reloading. Yeah.
D: Yeah, so both graceful shutdown and hot reloading are in our backlog. We want to do this; it's just a matter of having the time to do that. We also looked into remote configurability, where you have a configuration server to which the collector connects, fetches the configuration, and applies it on the fly.

D: We do want to do all of this, but it's a matter of just having the engineering time. Well —

D: If anybody wants to do that — the restarting part — the components are intended to be individually responsible. So technically, when that —
D: So if you have a change in the configuration, you're able to find out what the difference is, rather than the entire configuration being different, right? And if you can just stop and restart the components which are different, that would be the ideal approach, right? That's at least what the interface allows. Okay.

D: With the remote configuration there's also the issue of how you merge the remotely received configuration with the one that is specified locally, so we need to come up with some sort of rules for doing that — the overrides: what takes priority? If there are conflicts, do you resolve them? Or, if you work with a remote configuration, do you completely ignore the local one?
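Since these merge rules are an open question, here is one possible precedence sketch — remote wins on direct conflicts, nested sections merge key-by-key, local-only keys survive — purely to make the question concrete, not a decided design:

```go
package main

import "fmt"

// merge sketches one possible precedence rule for combining a remotely
// received configuration with a local one. Not a decided design.
func merge(local, remote map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(local)+len(remote))
	for k, v := range local {
		out[k] = v
	}
	for k, v := range remote {
		if lm, lok := out[k].(map[string]interface{}); lok {
			if rm, rok := v.(map[string]interface{}); rok {
				out[k] = merge(lm, rm) // merge nested sections recursively
				continue
			}
		}
		out[k] = v // remote takes priority on a direct conflict
	}
	return out
}

func main() {
	local := map[string]interface{}{
		"exporters": map[string]interface{}{"logging": map[string]interface{}{"loglevel": "debug"}},
	}
	remote := map[string]interface{}{
		"exporters": map[string]interface{}{"logging": map[string]interface{}{"loglevel": "info"}},
	}
	fmt.Println(merge(local, remote)) // the remote loglevel wins
}
```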
L: I have a question about that: is that something on your backlog where, if somebody had a design or proposed a solution, we'd have time to look at it as a group? Or is it kind of a distraction before GA right now?
D: I highly doubt that we will do that before the GA; most likely this is something we would want to do after the GA. But yeah, for the remote configuration in particular, and possibly for the hot reloading, we would want to see some sort of design document that shows the whole picture — how we want to do it — before we start the implementation. Graceful shutdown on its own is probably, likely, simply just an implementation matter, right? The bits are there.

D: But yes, I would like to see some design documents, and then, most likely, for the remote configuration and for the hot reloading — most likely that would mean that we would want to implement it after [the GA].
M: Maybe you went over my question here — so you read over it, but I'm not sure. So can I just go and discuss it a little bit? Yes? Yeah — the one is, like, "convert all resource attributes to metric labels". So I think you also saw the conversation on GitHub: Bogdan gave a suggestion, and I am working on writing a consumer. But my concern is that this consumer, I mean, should be used by all the non-OTLP exporters. So I was just wondering —

M: This is not coming from configuration, so how can this be used by all the exporters? Like, where do we put it? Is there anything else which is utilized by all the exporters, so that I can have a look and implement this one similarly? Also, I am not sure just how to put it — maybe I haven't looked enough, but are there any guidelines or any examples we are using today?
D: Somebody has a proposal on how to do that, but I would first try to understand what the code looks like, right? If we have the implementation that does this transformation, how do we implement it in a generic way? It's not entirely clear to me. Do we do that as a pre-translation step for all of the exporters? Then maybe it's, like, a sort of helper that other exporters can use before they do their own translation?

D: I don't know — I'm not sure what the best approach would be for you.
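A sketch of the transformation under discussion, with invented types standing in for the collector's pdata structures (this is the shape of such a helper, not the actual one being proposed):

```go
package main

import "fmt"

// Metric is an invented stand-in for the collector's metric type; a real
// helper would operate on pdata structures instead.
type Metric struct {
	Name   string
	Labels map[string]string
}

// applyResourceAttrs copies every resource attribute onto each metric's
// label set, without overwriting labels the metric already carries, so
// exporters with no resource concept don't lose the information.
func applyResourceAttrs(resourceAttrs map[string]string, metrics []Metric) {
	for i := range metrics {
		if metrics[i].Labels == nil {
			metrics[i].Labels = make(map[string]string, len(resourceAttrs))
		}
		for k, v := range resourceAttrs {
			if _, exists := metrics[i].Labels[k]; !exists {
				metrics[i].Labels[k] = v
			}
		}
	}
}

func main() {
	ms := []Metric{{Name: "requests_total"}}
	applyResourceAttrs(map[string]string{"service.name": "checkout"}, ms)
	fmt.Println(ms[0].Labels) // map[service.name:checkout]
}
```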
L: I didn't add myself to the agenda — can I ask a question, if we're done? Yeah. So, one of the things I noticed: there's a bunch of custom builds of the OpenTelemetry Collector. Is there a canonical "you have to have this in your collector to be considered an OpenTelemetry Collector in your custom build" or not?

L: Is there, like, a TCK? Or is there interest in, say, us building a compliance test kit, to say: if you want to call yourself the OpenTelemetry Collector, you have to abide by this set of config files and this set of, you know, input/output requirements, right? Is that something that exists, or something that could exist?
D: There are two official builds of the OpenTelemetry Collector today: the core and the contrib. The contrib is a superset of the core: it includes everything that core has, plus all the components in the contrib repository. These are the official builds; they are built on CircleCI and published to Docker Hub. Every other custom build is just that, right? That's a custom build that's not considered to be an official OpenTelemetry Collector.

D: I don't know if we want to have any definition of an OpenTelemetry Collector other than saying that the official one is what is built from the official repository and published to Docker Hub right now. What drives the interest — what's the issue? I'm not quite sure I understand.
L: I'm worried about users finding documentation for the OpenTelemetry Collector and it not being the canonical collector, and then doing things in, like, a custom build of a collector and expecting it to work in a different custom build of a collector — and kind of the confusion that happens there when, if you read —

L: Yeah, effectively: let's say I read the Google Cloud docs, and the Google Cloud docs are like "here's our build of the collector", right? Or the Amazon docs: "here's our build of the collector", right? And then I pick one of these up and start using it, and they call it the OpenTelemetry Collector, but it doesn't abide by the same set of docs as the OpenTelemetry docs — and so now I have kind of a confusion between these things. Yeah. So — yeah, I think...
D: It's unfortunate that people call it the OpenTelemetry Collector — I wouldn't call it that, right? That's not an OpenTelemetry Collector; maybe, at best, it's a customized version of it. I don't know how we do that, because it's an open source thing — everybody is free to do whatever they want with it. I don't know if there is a good solution to that; I mean, I'm open to suggestions. I understand what you're saying; it's —
G: If we go all-in with an OpenTelemetry Collector [distribution], then what we are probably going to do is publish what the modules — the components — are that make up our distribution.

G: So it is configured the same way as the OpenTelemetry Collector core, in the sense that you specify, you know, the --config flag to the binary, and then you specify a configuration file, which is a YAML file, which is a pipeline definition. But what actually goes in there — which receivers, processors, exporters, and extensions are available — that's at our discretion, all right? So you cannot expect all the components that you see on contrib to exist in the Jaeger distribution.
G: So, if we just go down this path, then we are going to publish a list of components that are within our distribution. And the same with one internal — or not internal, but one very custom — distribution of OpenTelemetry, which is called the Observatorium OpenTelemetry Collector; that was the main reason why we built the OpenTelemetry Collector builder. So with the builder you specify a manifest saying "I want this processor and that receiver", and it builds a distribution for you.
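For context, the builder works from a manifest roughly along these lines — a sketch from memory; exact field names, module paths, and versions vary by builder release:

```yaml
# Rough sketch of a collector-builder manifest; field names, module paths,
# and versions are approximate and differ between builder releases.
dist:
  name: my-custom-collector
  description: Custom distribution with a hand-picked component set

receivers:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/statsdreceiver v0.15.0

processors:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/groupbytraceprocessor v0.15.0
```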
G: So I guess the two points there are: one, users are familiar with the way of configuring things — the YAML file, and all the flags, like the metrics flag and the health check port and so on and so forth; and the second one is what it's actually composed of — you know, which components are there.
L: Actually, what I would like to build — if it's amenable to the community — is a TCK, if you will: a test compatibility kit for the OpenTelemetry Collector core where, if you give it a binary, it can throw YAML configuration at it, and it should make sure that the things we consider core in the collector work consistently, and that you can set up —

L: — you know, pipelines, effectively, with the core components. And all these custom distributions would need some minimum of OpenTelemetry configuration support in them to be, kind of, compatible. Like — so, so...

L: There'd be, like, a tag — "this is core-compatible" — and so we'd make sure, when each of us builds our version of the collector with extra little things to deal with, you know, legacy or whatever (which I'm sure Google Cloud will also have), that we're at least compliant with each other, right, so that users have a consistent set of expectations that will always work.
L: I love that you're putting in your docs exactly which components are included — we can do the same, and we can make that a community convention, so as to help avoid user confusion. So what I'm proposing to build is this compatibility kit: if you build a custom build of the collector, you run these tests against it, and it will keep all of them kind of relatively consistent around how configuration works, and so will the docs around the core.
G: Is that for the case where I want to move my collector somewhere else and keep the same configuration? Or what is the actual use case for that? Because if it is for interop between hops in the network, then that's a protocol thing, right — OTLP, for instance. Then I can say both distributions here support OTLP, so they should talk to each other, no matter whether they're actually based on the same software or not.
L: So this would be — the use case is more around: I'm setting up a collector to send to Google; I'm setting up a collector to send somewhere else, right? Or, let's say, I'm using both Google and AWS for some reason, right, and so I have a collector that sends to both of them. The closer the configuration is to each other —

G: Yeah — it just occurred to me: are you talking about sending data, or about operating the collector?
L: I'm more worried about sending data, but I'm also concerned about operating the collector. Like — so, as a user, do I think I can drop it in? If I'm going to try out the Amazon one — you know, send it to Amazon — and I use their collector, and then I'm like, "okay, well, now I'm going to try out somebody else's" and I just drop it in: how hard is that for me to do?
G: So we have the two cases there. And for end users — at least in the case that we know — they don't care what is actually running there, as long as it's receiving or ingesting data that is produced by the Jaeger clients, right? So I'm not sure a TCK for the config files would be that helpful. I mean, it's a nice thing to do, but perhaps — I don't know. Certainly, you know, people care more about the data that is being sent.
L: Yeah, I think the only reason to have any kind of configuration in there is to be able to consistently configure a compatibility kit. But to me, again, the fact that we can all speak the same protocol to each other — that's the most important thing to protect here, across all the different distributions, right?
D: Yes — I'm not sure what exactly it gives us, right? If I'm using Google's build, for example, which has a custom component, and I want to use the AWS build, which has another custom component that I need to use, they are already incompatible, right? If I'm using the custom component, then I have a problem already; the fact that they both implement the same minimal core set of components is not really helping you.

D: Well, yes — I guess if you have a receiver for a protocol, and the other one has an exporter for the same protocol, then you have the expectation that they should be able to interoperate. Okay...
G: Maybe — perhaps people are implementing the protobuf directly. They don't have the same binary, but, you know, they just create a gRPC service for OTLP. And then I can see that it would be beneficial to have a TCK there, so that people implementing the gRPC or the protobufs, you know, can be compatible with the OTLP endpoint.

G: They could then assert that they are compatible with the OpenTelemetry Collector — then I see some value in the TCK, right. But for validating the distributions themselves, I'm not sure I see much value there.
L: That's the focus of the ask. The only reason I was pushing for any configuration was that the only way I can envision implementing this TCK is having a common configuration format to fire at a binary, to then, like, do a network test of shoving things through it — you know, because I can configure it consistently. That would be the only reason why I think you would want any piece of configuration there. But the real thing here is to test that these things can talk together. That's what I'm looking for.
G: Yeah — from providers that aim to be compatible, what you can request or require is a container that starts the service on port X, and then you don't care about how it's set up.
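A minimal sketch of that black-box shape — it assumes the provider's container is already running and advertising OTLP/gRPC on localhost:4317 (the port is an assumption); the kit only verifies it can connect before firing real test traffic at it:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// The provider's container is assumed to be running already; the kit
	// only checks the advertised endpoint accepts a gRPC connection.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:4317",
		grpc.WithInsecure(), // plaintext is fine for a local compatibility check
		grpc.WithBlock(),    // fail fast if nothing is listening
	)
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected; OTLP test traffic would be sent here")
}
```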
D: Yeah, yeah — but I think it's useful to try to stay away from the configuration portion of it. Let the provider do whatever they want to configure it; it's up and running, so yes, let's test against it, right — at whatever port is advertised to be accepting OTLP. And we'll need to decide whether OTLP is the bare minimum that we need to make part of that TCK, or whether there is more to it that we need to understand, right? There's also the — it's for both the sending and the receiving side.

D: Yeah, I think it will be useful. It's just that we need to understand the mechanics of this, right? Where do the tests reside? How do people use them? If it's a custom build, do you consume the tests, then run them in your own repository and publish the results? Do you label them in some sort of way? These are the questions.
G: The CNCF has cncf.ci, where they run compatibility tests for all the Kubernetes providers, so we can — you can see how they are doing it; perhaps publish the results there.

L: Yeah — I'm willing to throw together a design and a proposal. I just wanted to see if that's amenable to the community, and I can put in the legwork.
D: Yeah, I think it's useful. I advise starting simple — don't spend too much time on it. Let's have a very trivial variation of it, and then we can improve on it, right? I would hate for you to spend too much time on it and then we find that it's not really what we wanted.
D: Okay, let's start small. An update from my side: we now have the experimental, or "unstable", build of the collector. So the builds now produce two executables for the collector: one is the regular one that we had before, and the second is the new —

D: — "unstable" one, which includes the new experimental features, right? And the first one that we added to that build is the Stanza receiver.

D: We have that now, and I'm now working on changing the testbed slightly to allow writing and running tests against the unstable executable, so that we can actually test the Stanza receiver, and we can do some testing with the log parsing and whatever else we will support in the Stanza receiver. So yeah, I'm working on that right now; hopefully we'll have a solution soon.
D: I don't have any other topics other than the presentation. Now, before we do that, does anybody have anything they want to give an update on or discuss?
N: I have two quick questions — those will maybe be pretty quick ones, so maybe we could address those. The first one is: when I'm looking at the protocol, the maturity level for logs is not even mentioned, and I'm wondering if maybe it could be mentioned as alpha right now — or maybe it's not the right time yet.

D: You mean in the protocol repository? Yes — so I submitted, like half an hour ago, a PR that marks it.

N: Okay, very good.
N: There was a question asked by Alex — I think he's not on the call — but he was asking about the log data model and its usage for GCP. And I'm not sure — I don't have experience with GCP, what the data model is there, and how it maps. But if someone knows the answer and can look at that, I think that would be helpful, because he brings up a point about multiple log names being present and how this affects the model and the mapping.

N: However, what Alex finds is that there's a list of log names coming in the logs. So I'm not really sure why that's so, and where it is the case, because when I was looking at the GCP log data model there was a single record with a single log-name entry; but Alex believes there's a list of them, which would mean that one entry can have several IDs or several names. But there might be some confusion there.
N: So I think that it would be great to verify first whether this is a valid question. Yeah — the mapping example for —

D: — Google Cloud Logging that we have in the data model: it was submitted by someone from Google. I can't remember who that was, but it was them; they did this. So I think we should probably connect Alex to whoever the person was who added this table for Google Cloud Logging, and I think they should probably discuss this.
D: It's best for them to do that, right? I can try to find the original discussion on the OTEP and in the PR — I remember someone else submitted it; I just started [looking at] it tonight. Okay.

D: Okay. Now, do you want maybe to tell us about yourself? Because this is, I think, the first time you're attending this SIG meeting — and then you can go on.
P: We manage part of — well, the log collection of one of the largest telecommunication companies in the world: a couple of tens of terabytes a day, which is not a lot, I know, for most of you guys, but for us it was a lot. We needed to find a very good, granular way of collecting and managing our logging infrastructure. Beforehand, I was the chief information security officer for another multinational company, called Orbotek. As you can probably hear, I'm a security person — but this discussion is not about security.

P: We, as security, initiated this discussion across the company, but honestly, you will probably see that security is, well, one beneficiary of what I'm about to show you guys — but definitely not the only beneficiary.
P: So, let me know when you can see my — actually, let me share my full screen. Let's do desktop two. Okay, let me know when you can see my screen. ("We see it.") Cool — okay, cool. The project is called Cornerstone. I am aware that the logo is of a single block; however, cinder blocks can also be used as cornerstones — so, at least until we have some designer doing the logo, this is what we picked.

P: Every team developed their own logging framework. So, essentially, everyone used log4j or some other logging mechanism, but at the end of the day, the logs that they'd written didn't make sense — or they made sense only to them. So think about trying to do a root cause analysis across a workflow that involves six or seven or ten different microservices.
P: It was pretty much impossible, unless you had ten different developers — or ten people from ten different teams — trying to piece the pieces together. And this is pretty much where Cornerstone was born. So Cornerstone today is not a technology — I'm putting that up front — it's a standard. It's a way of thinking about event management and event —

P: — logging. And essentially, the starting point for Cornerstone was to define a unified, inexpensive, and — expensive, sorry — expandable specification for cybersecurity, operational, and business-driven events — any type of event — that can be shared across multiple teams. So that, at the end of the day, what we're essentially getting is, well, a framework that will enable us to define logic once and have it applied to any type of system that's onboarded and has actually decided to leverage the Cornerstone project, which is what we're driving internally today.
P: So the schema itself is very complex today: it's already a hundred and fifty plus fields, more than 19 different contexts. I know — I've seen, and Tigran shared with me, the semantic conventions that exist for OpenTelemetry.

P: It is the log project, and we are aligned with that, but we have extended it quite dramatically, and we have built a very, very interesting — in my view — logic-driven pattern, which enables us to say: you can just define your event in 10 fields, or you can define an event with 30 fields.

P: It's very much dependent on the type of event that exists in the system — the type of event that you need to write — and all of that is essentially used to create meaningfulness across all of our platforms and our services. So, throughout the next few slides, I will run through it very quickly; of course, if there are any questions, please feel free to ask, to jump in. And then I'll show you the schema, and I'll show you the spec itself, and talk a little bit about the future of Cornerstone.
P: So, first of all, the contextuality. The first thing that we decided to do is create a unified specification that is actually a pre-execution part of our schema — of our event schema. That event specification, which I will show you soon enough, helps us align all of our logging systems. We are using multiple stacks: we have Splunk, we have ELK, and, well, we have others also in place, depending on the different product teams.

P: So our goal was to try and create a unified spec that every team can then build against — or at least define, in advance, the way that they are writing their logs — so that whenever a log is fed into the system, every system will be able to understand exactly what it is, you know, when it's coming; when it boils down to it, what it actually means. You know, there we are defining the timestamp type.
P: So whether it's epoch, or RFC, or ISO. And this is where we're defining whether it's a delimited, single-line type of event, or a JSON type — a multi-line event. It's where we define some of our more critical patterns, like: is it a multi-tenant-driven event? Because if it's multi-tenant — we'll touch on the contextuality and the usage logic — it means that you need to provide another subset of fields: what is the environment ID and what is the customer ID. Without that —

P: — we won't be able to know exactly which customer invoked that event. So everything is defined in the event specification — again, pre-execution — and the outcome of that pre-execution part is essentially a unique signature. That signature can now be embedded into the events that are being sent to the system. Every product team can have one, a dozen, 50, or so; as long as we're maintaining the uniqueness of our events, we are, essentially — for us, at least —
P: — we are now building a conversion for Splunk — again, one of our systems — that actually converts those event specifications (for the Splunk guys here) into inputs.conf and props.conf; so it essentially defines exactly how that event should be attributed inside the Splunk system. And that unique signature, again, defines the application, everything. On top of that, we have the event core — that is actually where things start to look like an actual event. This is the timestamp itself, what happened — you know: who, what, when, where —

P: — how: the very basic fundamentals that you have in every event. And again, I will go through the schema and show you how we have decided to define what is in our event core, but this is pretty much the standard behavior that you would expect from any event being generated by a system. And then we are throwing the event contextuality into the mix.
P: So we have, again, built a contextual system that enables a developer or a product manager to define the way that they are writing their events, and then the development team can actually build the event schema — or build the event — based on those contexts. So for the runtime context we have serverless, process, container, Kubernetes, host, or cloud; for network we have API, network and network traffic, and web context; and for data we have everything. Well, FICO is a data-driven company, as you probably know, so for us data security — and data in general — is very important.
P: First of all, we have split up our events into field requirements. So every field in our spec has a flag. Is it mandatory? Is it conditional? (For conditional, there is an example below in a second — essentially, if something happens — if multi-tenant is true — then I am expecting to see a customer ID, as an example.) Is it optional? (And again, that gives the product team the option of either putting it in or not.) And: mandatory, if applicable.

P: So if there is a user-driven event in the system — as an example, someone that changes permissions (and again, I'm sorry that I'm taking it to the security side of things; that's just my profession) — then you will need to put in the user context: essentially, the role context, what changed, who is the impacting user, etc. So that's mandatory if applicable. Then we have the different field types. Now, here is where I will be a little bit —
P: — [strict]. Our goal was to try and remove the vagueness, as much as possible, about what needs to be in an event and how it should be written. So, yes, a string is always a part of an event, and a string is very much up to the developer — and to the event that is happening — to define what is written. However, we have defined multiple — well, at least three dozen — different fields that require you to pick from a set of options.

P: So, as an example — well, we'll see in the schema, but there are quite a few strings with options. Again, if you are using an API: is it a POST? Is it a GET? Etc. You cannot really choose freely; you can't write "Post" with a capital and then write "post" in another way — we just don't allow that, right? Of course, we also have integers, booleans, and arrays.
P: Sorry — okay. And now let's look, as an example, at a use case. So, one of the things that we are talking about is multi-tenancy. Let's assume that the event is driven by a multi-tenant product — not all products here are multi-tenant, but if it is, then it does mean that customer UUID — which was conditional — is now mandatory, because multi-tenant is true; and environment UID is still optional, because not all environments [have one].
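To make that conditional rule concrete, a tiny validation sketch with invented field names (Cornerstone's actual field names and rule engine are internal to FICO):

```go
package main

import (
	"errors"
	"fmt"
)

// Event uses invented field names purely to illustrate the rule described
// above: multi_tenant == true promotes customer_uuid from conditional to
// mandatory, while environment_uid stays optional either way.
type Event struct {
	MultiTenant    bool
	CustomerUUID   string
	EnvironmentUID string // optional regardless of tenancy
}

func validate(e Event) error {
	if e.MultiTenant && e.CustomerUUID == "" {
		return errors.New("customer_uuid is mandatory when multi_tenant is true")
	}
	return nil
}

func main() {
	fmt.Println(validate(Event{MultiTenant: true}))                          // error
	fmt.Println(validate(Event{MultiTenant: true, CustomerUUID: "cust-42"})) // <nil>
}
```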
D: Yeah, I think we have interesting analogies here. We have a slightly different taxonomy of things in OpenTelemetry, but I can see the parallels. Maybe it would be good to also see some examples — of course, I'll probably have some comments — and, yeah, definitely I would like to better understand the conditional part of this: how this works.
P: Okay, so, a couple of examples of the elasticity of the framework. So, as I mentioned, we have the event specification. The event specification defines the format, defines the structure, defines the timestamp — and, again, multi-tenancy and some other key variables: what is the application name, the application version, what schema version you are using, and some other things. All of that generates an event signature, and at the end of the day, when you have an event, then you can —

P: — if you chose JSON, of course. If you chose CSV, then you need to put in the delimiter, and then you can pass it through with the delimiter. But this is an example of what it would look like if it was a JSON type of event that was submitted to the system; and if it is a syslog variant of an event, you can see that there is only the metadata here, and that it is delimited.
P: So every product team can define a different subset of variables and a different way of attributing them, to some extent, right? As long as it is in the schema — or in the spec that you have sent — then we know exactly what the order of things is, and we know how to consume it inside our system, so we will be able to make the right conversions as it's ingested — as it comes into the ingress of our logging system — to some extent.

P: So that is, you know, fundamentally — well, I want to say, in terms of resource consumption — easier to write, faster to submit, lower in latency, and lower in the cost of storage, of course. So this is a very, very simple view of the meaningfulness that we are talking about, and I will also show you examples again.
P: You can blame me for that, but we have started building rules in our security systems that we describe once, and they are suddenly applicable to a dozen systems — so, any onboarding of any new microservice, any new product, any application that is homegrown. And actually, this is a critical point: we are using this for ongoing applications.

P: We are providing a much better root-cause-analysis mechanism for our different product teams, because we are enabling multiple product teams to see and observe the data in a similar way. So when we have trained a developer, or a site reliability engineer, or a DevOps engineer — we have trained him on the model once, and he has the spec — he can understand any product. He doesn't need to understand the product, or the business of the product, in order to get meaningful data out of the events. And we have detection uniformity.
P: We are now considering starting to use this as a metering function, because we can evaluate the state of things through those events: we can identify how much compute we are investing, how much runtime it takes, etc. So that's something that we're evaluating. And, last but not least: support — and product support comes in two ways. One of the things that we are enabling these days at FICO is for our enterprise customers to consume logs that are associated with their environments.

P: Now, it might sound silly, because most SaaS companies already provide that; we traditionally haven't, for our managed services. So it is a big undertaking on our part, and with Cornerstone we are able to carve out our customers' data. We are able to provide them with very straightforward API specs to collect the data — and that is the future of Cornerstone.
P: So I have two ways of showing you guys the schema today, and it's very much depending on — you can see the Airtable right now, right? Just to make sure — yes, okay. So I will show it in Airtable format, just because it's easier to view and look at, and then we'll jump to the code, and I will also show you how the schema looks in YAML format. But essentially, what you see here is the event specification.

P: These are the fields that we ask for in order to generate a unique identifier for an event, essentially — or for an event stream; it can be multiple events. So we are asking for a timestamp; again, we are mandating different functions here. You won't see conditions in the event spec — you will see them in the event fields. And you can see the structure; you can see the verbosity level (that one is actually optional — they can define it or not).
P: Whether it's single-line or not; whether it's delimited, and what the delimiter is; whether it's multi-tenant or not; what the application name is, the module name, the version name. And then, if you are looking to define the structure of your events — because you are sending them over syslog, or you are sending them in a variable format rather than the full spec, where you essentially have key-value pairs — then, in the event variables —

P: — you are defining the list of variables, in order. That is the event specification. Jumping to the event fields — and I will show you a couple of different views of that, so it will make a little bit more sense — the event fields are essentially our entire schema, and everything is essentially contextualized. So "container" means that it is a holder for different fields.
P: And again, I will show you how it's built in a second, but let's run real quick through the context view — sorry. So these are the contexts that we are currently collecting, or enabling users or developers to build against today. Now, some of them are nested within each other — so it's not that they are all at the root level; we have a structure for it — but you can see here that we have the user context, and under user we have initiating user and impacting user.

P: We have cloud context, web context, and all of those types of contexts. And if we actually jump into context grouping: as an example, if I'm looking into the anti-malware context, then we are asking whether the file was scanned or not. And by the way, the anti-malware context is a good use case, because if a file event has occurred, then the anti-malware context is conditionally required — so anti-malware is required if the file context is being called out as a part of the event structure.
P: So: what is the anti-malware engine, the detection results, etc., the recommended action, and the action taken. Let me take another example — let's do data import. So: what is the type of import (if it's, again as an example, web upload, email, API, local storage, remote storage); the method of import (whether it's automated or not); and what is the import name, or the user ID given to the export — oh, sorry — to the import. That is a bug; let me fix that.

P: Okay, that will be pushed next time. Any specific context that you guys might want to see?
P: I can definitely jump into each and every one of them. And I will say, Tigran, that I have taken the liberty of aligning this structure to the semantic conventions of OpenTelemetry. So everything you guys have defined in the OpenTelemetry spec is already annotated here in this schema. So it goes beyond what is in OpenTelemetry, but it has everything that OpenTelemetry has today for the logging parts. And jumping into the requirements view — that is an interesting view, in my view.
P: First of all, what is mandatory. So, as you can see, in terms of what is mandatory we actually only have eight fields: the UUID of the event; the signature (again, we need to know the signature in order to know how the timestamp is written, whether it's single-line, multi-line, etc.); the timestamp itself; the summary; the type; and the subtype. So we have — I want to say — pretty much split the world of events into two event types: an activity and a change.

P: And, by the way, I would be very happy for peer reviews, and agreement or disagreement on that, but every type of event we have observed is either an activity or a change — we haven't found [a third]; that is the common denominator, essentially. So those are the event types that you can choose from, and then, of course, you have the subtype — so it's just CRUD.
P: Whether it's a success or a failure; the severity is based on the syslog severities, so we haven't reinvented the wheel. And then we are jumping into the conditional fields — which, Tigran, you found interesting, or wanted to deep-dive into a little bit. So, every conditional field: as an example, "failure reason" is conditioned upon the event result being a failure.

P: The environment unique identifier and the customer unique identifier are conditioned upon multi-tenant equalling true in the signature of the event; and "quarantine file path" is conditioned upon the malware action taken being "quarantined". So we have taken as much as we could, in order to reduce the noise to some extent, and only condition upon things that are happening. We have other types of conditional activity — or conditional requirements — and let me jump through them real quick. So we have, well —
P: We have mandatory-if-applicable, as mentioned: if the user ID or user behaviors are required, then they are mandatory. We have mandatory-if-chosen, and that is, essentially: if you have chosen to write to the data export context, you must fill in these types of fields, and you cannot — well, you can't bypass those, essentially. Again, if you chose — oh, sorry about that.

P: Let's do auto-hide — okay. So those are mandatory-if-chosen; let's just collapse all. So, if you're looking at the deviation — or the amount of fields that we have in our spec — you will see that the majority of them are optional. We are not — you know, we are not going to our developers or our product teams, twisting their arms, and saying "you need to write everything into the event". Most of the things are optional.
P: It is at the discretion of the product team to actually enable — or choose to use — those or not. And actually, that is a bad behavior on my end, because it's optional but the context is mandatory if needed, so I will need to fix that in our next iteration of this. But again, you know, if you are looking into the cloud [context], then you can put it in, or you cannot put it in — it's very much dependent on how the product team, or the developer —

P: — that is writing the log needs that data. The next piece is the event examples. So, what we are actually working on a lot is: how do we create uniformity across all the events? And this is an example: today, a product manager can come and say, "okay, I know that I need to log a user authentication event."
P: "What do I need?" And then, essentially, we have already created all of those fields, and we say: okay, look, here are all the fields that you will need to put in the event in order to make sure the user authentication event is being addressed. And again, this is a work in progress, so you will not see a lot of those — this is just an example.

P: The next step afterwards is to build the rules around it. And again, we are security people, so consider these security rules, but they will definitely also be operational rules and business rules as we progress. We have different types of rules that are associated with events, so that now we know that if we want to detect a brute-force attack, and we are associating an event, then we will have the exact analogy and the exact logic across all of the products. So every product that actually leveraged this event — which is nested, you know, which is a subset of the events — will be able to be tracked.
P: Yeah, that's a good question. So the way we are currently doing it is that the product team actually builds — or defines — the event specification, and we are providing them with a signature. So we are taking that signature, and then — today, again, manually; it is not something that will stay manual in the very near future — we will have a conversion mechanism that converts those into ingestion rules for the different logging systems that we have.

P: So the team can just post a request via API, saying "we have a new event signature, and this is the structure", and we are essentially building — again, for our Splunk instance, as an example — we are adding the inputs.conf and we're writing the props.conf in advance. So whenever they use that event signature, we're already — you know, it's already in the system. In the future of Cornerstone —
P: — we are looking to have a dedicated microservice to do that as a whole — also to manage the access to the logs: you know, customer access to the logs, internal access to the logs, rotation of everything. But that is in the future, and I will talk about it a little bit farther down the line. And the —

D: The events — the events themselves refer to the signature?
P: Our goal is to migrate as much as we can from the event structure into the event specification, if we are able to. We want to reduce the size of events, and we will do whatever we can in order to make sure that we are reducing the size of events. As an example, the deployment environment — that's a very good example: this is something that can actually live in the spec. You don't need to send it every time, because if the event signature is X, then we know it's production.

Q: So how is it that — so, presumably you've got one version of this that's encoding this inside whatever process is logging, and then you've got this on the decoding side as well. How are you managing different schemas on either side? Because as the schema evolves, right, you're going to drift on that.
P: Yeah, definitely. So we do have the schema version in the signature, so we are tracking it, and we will have backward compatibility; our current goal within FICO is to support N minus one. So you will need to amend whatever you need to amend, but you will have — just like with any other type of version tracking — you will have enhancements, and the teams will just need to implement them as they grow, as they use the schema.

P: If that makes sense. So, a little bit into the structure, and then I will show you an example. I will apologize in advance: the example is from an older version of the schema. I have pushed a new set of examples — or, the team pushed a new set of examples — but I haven't pulled them yet. So the schema itself — this is pretty much how it looks.
P: Let me just shut things down. So, just as you've seen in Airtable, everything is nested one within the other, right? And if you are choosing — let's do: if you have a file, then you will need to use the file context, and then you have the different fields. Again, not everything is mandatory.

P: Some will be optional, some will be conditional, but that is pretty much what we're enabling the teams to do. And again, it is very scary at first, because it has a lot of data, but when I show you the example, I can assure you that this is just the schema itself. We found that, most of the time, you know, 20-30 [fields] are actually being used; it's not that everyone needs all of the data at any point in time.
P
So let me show you an example. You will need to forgive me, but this is an internal example, given that it's an internal project at this point. One of our systems is a system that generates solutions; a solution is actually an encapsulation of multiple microservices, and this is an example of a solution creation.
P
So, a solution creation event. We have the event UID, which is a unique identifier for the event. We have the event signature, which ties back to an event spec (which I don't have here, apologies) saying that it's full JSON, and so on. We have the multi-tenant flag on, so we have the environment ID, the customer UID, and the session ID.
P
We have the timestamp, which we defined in the spec to be ISO format, and we have the event type, which is "change", and the event subtype, which is "create". Then we have the activity summary: the who, what, when, and where, the standard piece. As an example, you see that the activity result here is "success", so you won't see a failure reason within the event, because it's not needed; it wasn't a failure. And the status is "complete".
P
What the duration was, and tags, which is something we added due to popular demand: people want tags in their events to be able to run their own analytics on them. Then we have the contexts. The initiating user for that generation was app scc customer A, the user assumed was an administrator, the is-admin-user flag is true, and the user type was a service account. Then the web context: it was an API request.
P
We have the different HTTP parameters that were sent, and we have the data context, because he also imported the configuration through the API, and that information essentially ran a stored procedure against this database name and this instance name, and this was the event. Again, none of this is mandatory, but if we wanted to, we could have checked it. Then the file context, for the file that he pushed through the API: the file was a customer solution config, the file size in KB was 3K, and the file type was raw.
P
The file extension was conf. This is the hash of the file, and this is its location. Were we scanning for malware? Yes, we use VirusTotal, and we haven't found malware. Did we scan for classified data? Yes, and we haven't found classified data. So that is an example of an event, and it's actually a fairly complex event, because you can see it involves a lot of contexts; it's definitely not what we will see across all events. But this is the level of meaningfulness.
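For reference, here is a rough reconstruction of the event just walked through, assembled from the narration above; the exact field names and values in the real schema may differ.

```python
# Reconstructed from the walkthrough above; field names are approximations.
solution_create_event = {
    "event_uid": "e7f1c9d0-...",              # unique identifier (elided)
    "event_signature": "solution.create.v1",  # hypothetical signature id
    "multi_tenant": True,
    "environment_id": "env-01",
    "customer_uid": "cust-a",
    "session_id": "sess-42",
    "timestamp": "2020-10-28T17:03:11.000Z",  # ISO 8601, per the spec
    "event_type": "change",
    "event_subtype": "create",
    "activity": {"result": "success", "status": "complete", "duration_ms": 118},
    "tags": ["analytics"],
    "user_context": {
        "initiating_user": "app scc customer a",
        "assumed_role": "administrator",
        "is_admin": True,
        "user_type": "service_account",
    },
    "web_context": {"request_type": "api", "http_params": {"import": "config"}},
    "data_context": {"stored_procedure": "...", "database": "...",
                     "instance": "..."},
    "file_context": {
        "file_name": "customer_solution.conf",
        "file_size_kb": 3,
        "file_type": "raw",
        "file_extension": "conf",
        "hash": "...",
        "malware_scanned": True, "malware_found": False,
        "classified_scan": True, "classified_found": False,
    },
}
```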
D
Since you looked into OpenTelemetry's data model, do you see anything there that would make the representation of this data ambiguous, or unrepresentable?
P
The problem I have experienced in my many years in the business is that if you leave it to anyone to decide however they want, you take two people, you will have three different ideas on how to execute something, and you will have five different implementation methods. At the end of the day, the reason we built Cornerstone was the uniformity.
P
As a general statement, we are looking at OpenTelemetry, and actually this is something I discussed with our chief architect very recently: we will be looking to adopt OpenTelemetry as a project for our telemetry data. However, even with a technological foundation like the one OpenTelemetry and its spec provide, we want to make sure that there is uniformity across our products. If I remember correctly, some of you are from Splunk, so I will take the Splunk CIM as a good example.
P
The Splunk CIM is a great common denominator for events, but it doesn't tell you what to write. It tells you, if you wanted to, this is how you can send the data, or this is how we'll translate it inside our system, which is nice and great to have, but it doesn't solve the problem that everyone creates, writes, logs, and ships different data. I feel that is the core problem we are trying to solve. We are not looking to replace log4j; we are not looking to replace OpenTelemetry.
D
Yeah, that makes sense. Part of this is the motivation behind the semantic conventions; they address some of the problems you solve here, though you solve a bit more than just what the semantic conventions do. The problem is clear, and I completely agree with you. I would also be interested in understanding the implications of the approach that you have with the schema specification.
D
You haven't been using this for ten years, so you can only tell us what you've learned so far, but I would be very interested in understanding that: if you do this, then what happens? What are the complications that arise as a result? We wanted to have a schema specification as part of the OpenTelemetry data model as well, but that was the...
P
Yeah. I think, maybe before I touch on the future, I can share a little bit of what we have learned to date. We have learned a couple of things, at least in our journey, because essentially we are also driving and selling this to our product teams, and it does require some evidence for them to accept and adopt it.
P
So we have learned a couple of things. First, very broadly speaking: if we provide starter packs, or if we tell them, "these are the events that you should log, and this is the structure that we want to see," they are very good at adopting it. If we dump 150 fields on them and say "go figure it out," they get lost in the pool of data, to some extent.
P
So we, with the product managers, essentially build the event specifications, defining how events should look, and then have the developers describe, or to some extent build, the Jira requirements with all the different fields and with the context of an event, rather than "here's a schema, go figure out how to implement it." I think that is the first and very big thing we have learned, but, as you mentioned, we are very much in the early days of it.
P
We are building a platform at FICO: there is a platform team, and then there are a lot of products running on top of the platform, so you can think of it as building an AWS-like ecosystem. The amount of challenges the different product owners have when they are leveraging microservices that were not developed by them, I'm sure you understand, makes it complete chaos to some extent. And everyone agrees, and this is another thing we learned: everyone agrees and understands that we need Cornerstone.
R
I preemptively unmuted myself, so I might as well throw out the question: everybody agrees that Cornerstone is needed, but internally, how much of a rat hole have the discussions about the actual semantics been? Because that's usually where, as you said yourself, you've got two people, you've got five opinions. I know this from my own work, and so on.
R
That's really the hard part. Syntax, people can usually grok somehow, or at least get on board with, but what a particular word means, and what it means in this context, whether it should be there or not, or whether you're missing something, that's hard.
P
It is still a struggle. I can't say that it's coming easily, and there are still a lot of questions, but I can say that the understanding that we need less vagueness and a stricter approach helps: if it's multi-tenant, you need x, y, and z; if it's x, you need y; we expect to see this and that, in this format. It reduces the amount of complexity and the amount of issues that people experience, because at the end of the day...
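That "if it's X, you need Y" strictness can be expressed as conditional-requirement rules, as in this sketch; the rule contents are invented for illustration.

```python
# Hypothetical conditional-requirement rules: when the condition field holds
# the given value, the listed fields become required.
CONDITIONAL_RULES = [
    ("multi_tenant", True, ["environment_id", "customer_uid"]),
    ("activity.result", "failure", ["activity.failure_reason"]),
]

def get_path(event: dict, path: str):
    """Walk a dotted path; return None when any step is absent."""
    cur = event
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None
        cur = cur[part]
    return cur

def missing_fields(event: dict) -> list[str]:
    """List required fields that the event omits, given its trigger values."""
    missing = []
    for field, value, required in CONDITIONAL_RULES:
        if get_path(event, field) == value:
            missing += [r for r in required if get_path(event, r) is None]
    return missing

print(missing_fields({"multi_tenant": True, "environment_id": "env-01"}))
# -> ['customer_uid']
```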
P
Okay, so a little bit into the future of Cornerstone. On the non-technical side, the first thing we will be looking to do is to open source it. At the end of the day we are building something useful, and we definitely want to open it to the community: first, to get a community around it and to build more insights. We don't hold all the knowledge; I'm sure many people will come and innovate with more contexts that may also be needed by us.
P
So definitely, sharing is an important goal for us, and honestly, one of the reasons I'm here today is that I approached Morgan and Tigran to see where it makes sense to put this in as part of OpenTelemetry, because at the end of the day it is a schema, not a technology; log4j will still be the technology underneath.
P
Given that we are evaluating OpenTelemetry across the board for our organization, it only makes sense if we could converge on those, so that is definitely something I would be excited to explore. But regardless, we will be looking to open source Cornerstone as a schema. The second thing we will do, and you have already seen a glimpse of it, is the rule framework. We are calling it Mortar, as in, we have the brick and the mortar: the unified rule framework will cover the event structures and the different tools.
P
We will open source those as well, because we have already built solutions for different attack vectors and different business cases, things that are not proprietary or exclusively of interest to FICO, and we will be looking to expose those too. That will be the unified rule framework. And, as we discussed regarding adoption, we will have starter packs, and those starter packs will essentially be the events that you have seen.
P
Every product needs to have at least authentication, authorization, change controls, service-goes-up, service-goes-down, and so on. That starter pack will be manifested in multiple ways, one of which is that it will be shipped as Jira tickets, or at least as CSV files that can be imported directly into Jira, so that when you have a new product, you can just import those and you will have the schema, the spec, and exactly what you need for the events of your product, to some extent. That's the non-tech side.
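The Jira-importable starter pack could be produced with something like the following sketch; the pack contents and CSV columns are illustrative assumptions based on the minimum events listed above.

```python
# Render a starter pack as CSV rows that Jira's bulk importer can consume.
import csv
import sys

STARTER_PACK = [  # hypothetical minimum events every product should log
    ("authn.login", "Log authentication events per the Cornerstone spec"),
    ("authz.denied", "Log authorization failures with the initiating user context"),
    ("change.create", "Log change-control events (create/update/delete)"),
    ("service.up", "Log service start with environment and version"),
    ("service.down", "Log service stop/crash with failure reason"),
]

writer = csv.writer(sys.stdout)
writer.writerow(["Summary", "Description", "Issue Type"])
for signature, description in STARTER_PACK:
    writer.writerow([f"Implement event: {signature}", description, "Task"])
```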
P
On the technological side, we will soon start building the service broker. The service broker will answer Tigran's question about how the event signatures are managed. We have only started designing it, but the end goal for us is to create a service broker that will do two things.
P
First of all, it will sit on top of Fluent Bit, because we are also using Fluent Bit today as our logging infrastructure for our Kubernetes environments and our applications. So we will essentially collect the data and enable access to it for our customers: we will manage APIs, access keys, and so on, and we will also manage access for internal individuals. As an example, say you are a support engineer and you need to see only a subset of the event.
P
So you need to see a subset of the schema of the events. Let's assume that the schema might expose some sensitive data and you only need access to some specific subset of the schema in the event format; the service broker will then be the API endpoint for you to get that. It will provide full role-based access control, authentication, and authorization for all the APIs to and from Cornerstone. Second, we will release an event modeler and a linter for our developers to use in their IDE.
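A minimal sketch of the role-based subsetting the service broker would enforce, with assumed role names and fields: each role sees only its allowed slice of the event, so sensitive fields never leave the broker.

```python
# Hypothetical role-to-fields mapping; contents are invented for illustration.
ROLE_VIEWS = {
    "support_engineer": {"event_uid", "timestamp", "event_type", "activity"},
    "security_analyst": {"event_uid", "timestamp", "event_type", "activity",
                         "user_context", "file_context"},
}

def subset_event(event: dict, role: str) -> dict:
    """Return only the top-level fields the caller's role may see."""
    allowed = ROLE_VIEWS[role]
    return {k: v for k, v in event.items() if k in allowed}

event = {"event_uid": "e-123", "timestamp": "2020-10-28T17:03:11Z",
         "event_type": "change", "activity": {"result": "success"},
         "user_context": {"is_admin": True}}
print(subset_event(event, "support_engineer"))  # user_context is filtered out
```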
P
So whenever you start to use Cornerstone, when you onboard one field, the system will already let you know that you need to onboard another. And we will also release the event modeler as a UI component for our product managers, with drag and drop: they will be able to drag and drop the different fields, and it will just build the events for them.
P
So yeah, that's pretty much Cornerstone. I see I took too much of your time, guys; we are near the end.
D
Thank you, this is very useful. One comment here: the work that you're doing is, I guess, equally applicable to traces as well. Almost everything that you would want to record on log records is likely applicable to a trace span. OpenTelemetry's approach today is that what data you record in a span or a log record is dictated by the conventions, what we call the semantic conventions.
D
I am not sure, maybe that's the evolution of what we're doing at OpenTelemetry, but I can definitely see that this makes sense for traces as well. So I would not limit what you're doing to logs only; in my opinion, it is probably applicable to spans as well. That's the comment I had.
R
We can follow up on email, but maybe briefly: were you familiar with the Elastic Common Schema before you started designing this? And whether you were or not, comparing the two now, does it make sense to have a different one versus extending that one?
P
I am not aware that it was evaluated, but it is definitely interesting, and I will look at it as a potential way for us to learn, improve our posture, and converge with it.
P
Well, for us this was a very good way of reducing the size of the events initially, and also of creating a more systematic approach to understanding the events: you define things only once. If there are things that are repeatable, you shouldn't repeat them. That's pretty much the logic that we took.
R
I think that's fair; it's obviously more powerful if you have that sort of semantic metadata on top of it. I guess what I'm wondering is: if I were to receive these events but could not refer back to the signatures, because maybe that schema service isn't there yet, or maybe it's not working, or maybe I just get the data and I don't have this other thing, it feels that, at least in a JSON representation, you could still...
P
Essentially, then, you have the full context inside the event. There are a couple of issues that we have identified if you are using the JSON without a specific signature, but that's definitely optional. I think the challenge we have met is the size of the events: when you're using the full JSON structure, you're essentially producing very big events, and that is again one of the drivers for the signature.
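A tiny illustration of that size argument, with invented payloads: the self-describing event repeats constant fields on every record, while the signature-referenced event carries only what varies.

```python
import json

# Self-describing event: constants travel on every record.
full = {"deployment_environment": "production", "schema_version": "4.2",
        "format": "json", "event_uid": "e-123",
        "activity": {"result": "success"}}

# Signature-referenced event: constants live in the registered spec.
referenced = {"event_signature": "solution.create.v2",
              "event_uid": "e-123", "activity": {"result": "success"}}

print(len(json.dumps(full)), len(json.dumps(referenced)))  # referenced is smaller
```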
R
Yeah, I think they usually compress fairly well on the wire, and then it depends on how the back end stores them, but yes, in their textual JSON representation they are very verbose; that is definitely true. Okay, I think we're running out of time. I might have some more questions; I'll just hit you up on email.