From YouTube: 2021-11-17 meeting
B: …Street Journal about how, as everyone knows, Zillow Offers blew up recently and they had to shut the whole thing down. They were buying houses, and they wanted to buy more to keep up with their competitors, so they'd hard-coded their offer system to just add seven percent to the price they were offering for every home, and now they're selling a lot of homes at exactly a seven percent loss. They blame their algorithm, but the problem was that the seven percent was literally hardcoded on top of the algorithm output.
A: So let me share my screen. Hi everybody. I'd like to suggest that you just interrupt with questions whenever you have them. I'll pause a couple of times to solicit questions, but this should be more of a discussion; I hope this will lead to a discussion. Also, if something is not clear, just interrupt. Okay. So I want to tell you today how the eBPF collector, how NPM
A: uses structured logs. A little bit about me: I've worked for several years for government, where I built some components that allowed us to extract telemetry from large-scale deployments in the field and operate them.
A: My PhD was around extreme monitoring systems. These are the type of systems where, in a data center cluster, you can have your monitoring system get metadata on every packet that's being transmitted; the system analyzes it, makes a decision, and then affects the nodes in the cluster. So, systems that can process massive amounts of telemetry with low overhead.
A: I founded Flowmill, which was a company to do network performance monitoring, giving visibility into networks using some of the principles that we developed during research, like the research at MIT, and we recently joined Splunk; I think it's almost a year ago now.
A: So I'd like to start with a viewpoint into the types of use cases you can have for structured logs. The first one would be: I just want to get logs, so textual logs, but I want them to be low overhead.
A: I think higher up is where you want the query systems for your logs to have richer semantics. So, for example, you want your users to be able to tell you the field names, as in "filter all the events that happened for this user," and maybe some application-specific values, like cart value.
A: On top of that, though, I think there are further use cases that we should enable, that we should think about as we design a standardized structured logs format. These are upcoming, and NPM, the eBPF collector, is one example. One of them is that you want your components to be able to query state: not just retrieve logs, but maybe maintain some state. So, for example, I created this
A: example where maybe you have log lines for every process that is being created, where you get the process ID and the path of the executable, of the binary that's running it. You want to create a view of all the processes and their binaries, so that when you have some file I/O logs saying this was rejected or this was accepted, you can see the name of the binary that made the call, that made the file operation.
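A minimal sketch of that stateful join, in Python; the event shapes and field names here are illustrative, not the collector's actual format:

```python
# Maintain a view of processes from process-creation events, then use it
# to enrich file-I/O events with the binary that made the call.
processes = {}  # pid -> executable path

def on_process_start(event):
    # event: {"pid": ..., "exe": ...}
    processes[event["pid"]] = event["exe"]

def on_file_io(event):
    # event: {"pid": ..., "path": ..., "result": ...}
    # Look up which binary this pid belongs to and attach it.
    exe = processes.get(event["pid"], "<unknown>")
    return {**event, "exe": exe}

on_process_start({"pid": 101, "exe": "/usr/bin/curl"})
enriched = on_file_io({"pid": 101, "path": "/etc/passwd", "result": "rejected"})
# enriched now carries the binary name alongside the raw file-I/O fields
```

The point is that neither event stream alone answers "which binary touched this file"; the state table is what links them.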
A: And I think, once you have this ability to keep state and to extract state and relationships between the entities that these structured logs, these events, are about, you can run more semantic analysis. By understanding how the system is structured, you can create these analyses on the system. So, for example, a security system might count the number of new sockets that a process is opening to detect a port scan. So you can say: okay, if
A: this process is opening a lot of sockets, then I'm going to go look at the process, look at what user is running the process, and increase the risk level in my state to say this is a high-risk user for being compromised, right?
A: So you want to build systems that not only enable you to query the logs per se, but create stateful analysis on top. And if you think about these very complex distributed systems that organizations have been building, a lot of the more subtle bugs that occur in these distributed systems you really want to check by creating some logic where you check invariants, right? Like: I expect this to always have happened just before that other event happened. And how do we facilitate those for users?
A: …order from your structured logs system in order to get each one of those. So for efficient overhead, you need encoding and decoding, and the encoding itself needs to produce small-size structured logs, so you need schemas. I'm at the second level now: you need schemas. To get state, you need a way to represent entities, the objects that you're collecting state for, and the relationships between them. And to create more systems,
A: you need a programming model. I know you invited me here to talk more about the encoding, decoding, and the schema today, but I think this is a good point to just pause for a moment for questions.
A: Network performance monitoring, so it's network monitoring that you do through the operating system. eBPF is a Linux operating system technology that allows programs to actually extract events. So you have these events, or structured logs, that arrive all the way from the operating system up, and the question is how you make sense out of them and create a picture of networking in a cloud setting, or a hybrid cloud setting.
A: So what kind of data do you get from the networking stack and from the operating system to get this type of visibility? Well, first, you can look at individual sockets. You can get events whenever a socket starts and whenever a socket closes, and you can get statistics when packets are sent or received, so you have these socket stats. You can also look inside payloads and get HTTP stats from the kernel. But per se,
A: this data is not as useful without context. To really get a useful picture of what's happening in your Kubernetes cluster,
A: you also want to know what process each socket belongs to, what container holds that process, and what host it's running on. You get these event streams that tell you how each of those subsystems in the kernel behaves; you get these types of events for hosts. It's all about context: how do you build an entity model? I'm trying to motivate having a system that can process these events and make sense of them. So for hosts,
A: you want to get information from your cloud provider. For containers, you want to get information from Docker or your Kubernetes pod. For sockets, there's network address translation, and after you do network address translation you can get information from your cloud provider on where your databases are and where your load balancers are, so you can have useful naming for endpoints. And there's also DNS, right? So it's a pretty rich data set that you need to collect and make sense out of, all of that.
A: I'm not going to mention these. So the point that I wanted to make here is that really you need to analyze this corpus of structured events in order to make sense of it, and really the context is what drives value.
A: …other events based on those; you do higher-level analysis based on the low-level events. And the ability to reason about them, to have named fields that have semantics and to be able to link, like "this socket is under this process ID, and this process ID has this binary": these links across events are what enables modularization here, because you want to create collectors for the socket subsystem, for the process subsystem, for the container subsystem in the kernel, and then be able to merge those events together.
A: So the structure here, having these named fields and the semantics of how they're linked together, just enables this entire system.
A: It's too complex to build an observability solution that denormalizes all the data, right? I don't know if I want to make an analogy to other applications that are doing logs, but you cannot just extract all the information from all the components to create one log line that just makes sense out of the box. You really want to have something where you can analyze it.
A: So structured logs are really important. And the last point is that, by having a good system for structured logs and being able to track links, you can have huge advantages in not repeating information across structured events. So, for example, we've had these measurements where, for every container... these are measurements from live systems.
A: Usually you see many thousands of processes and tens of thousands of sockets (waiting for this to come up). So, more than ten thousand sockets per container that you see in real data, and then for the socket statistics you can have hundreds of thousands per container. So for every message of one type,
A: you get a lot of the lower-level messages, so having a system where you report each thing once and create links between them is critically important for getting good visibility.
A: So with that, I think I gave you an overview; maybe there are questions on NPM and the use case?
A: So with that, today I want to focus on encoding and decoding and the schema, and less on the programming model and the entities and relationships. But having a good schema and efficient encoding and decoding is critical to enabling even this application. I have some data later, but just the volume of events that you get out of these types of systems means you really need extremely low overhead in order to support this type of solution.
A: So, talking about overhead. This is a benchmark that FlatBuffers did, and it's on the FlatBuffers website. I didn't reproduce it; I'm showing you this verbatim from the website. This was early motivation for our solution for structured logs.
A: What you can see here are some vertical bars. Let me go through some of the numbers. If you want to go with raw structs, you can get extremely efficient decoding (this line is decoding): you can decode one million messages in 20 milliseconds on that benchmark hardware.
A: And protocol buffers is great because it has great semantics: it's forwards and backwards compatible, it's a very mature ecosystem, it has a mature schema language, so it's relatively easy to use. But you have to pay the cost. If you look at the encoding, for example, that's the next box here (I don't know why it is there): comparing protobuf to just a raw struct, just dumping the data onto the wire, you get three orders of magnitude
A: more overhead with protocol buffers. This is CPU overhead to encode one million records.
A: The wire format for protobuf is more efficient; it wins here. But if you look at the number after compression, the 174 versus 187, you can get a relatively comparable wire format with significantly lower encoding and decoding overhead with raw structs.
E: So I guess this is not really an apples-to-apples comparison, right? With raw structs you can't do anything except read them yourself; you send them anywhere else and you have a problem, right, unless the reader is an exact copy of yourself, which kind of kills the interoperability, and interoperability is the purpose of sending something somewhere in most cases.
A: Amazing, so thank you for that. So the question that we had when we started NPM is: can we have this type of interoperability experience like protobuf, and the great programming experience, and get the performance of raw structs? Can we get the benefits of both worlds? What we did to solve this is we developed a system called Render, an internal name; maybe we can find a snazzier name for it, but this is what we called it.
A: It has a schema, where the developer writes the schema and the framework generates the struct descriptors (which I'll mention in a moment), encoding and decoding libraries, and receiver functions that developers can then implement in order to process these structured logs. For now, the Render framework also generates OpenTelemetry logs serialization. So what we do is, we have, you know, the library sends…
B: …of how to make it work. A really dumb question, because I haven't written a lot of code that does this. I've heard of frameworks like, say, Cap'n Proto, which I think is a more binary-style protocol, a gRPC alternative, or sorry, a protobuf alternative. Is Render sort of comparable or similar to that in its capabilities and what it does?
A: Great. I haven't looked at it in a while, maybe a few years, so maybe I shouldn't make claims about individual technologies. But what you can have is: either the technology is self-describing, so every message encodes all the information you need in order to do forwards and backwards compatibility, or you have this type of descriptor. We took the descriptor route.
A: I think we might be... I don't remember another standard that uses descriptors like we do. So that is the trade-off we made: we trade doing this protocol negotiation with descriptors against message sizes, so that you don't have to convey your format on every message. I'll get to it in a moment.
C: Can I ask a couple of questions? This may be the wrong time, but you can let me know. I've heard about this system called Apache Avro, which is a self-describing format, and I wonder if you could comment on that sort of self-description. I'm not very familiar with it; I'm just curious about this.
C: The second part of my question is more practical: there's a proposal for the Apache Arrow protocol, that data format system, to be used to represent columnar data, which we could benefit from in both metrics and traces, but probably also in logs, and I wonder if there's a world where they coexist, or whether there's a relationship there. I don't actually know anything about this.
A: I think maybe we should leave those to the end of the talk, because those are bigger topics. But for Arrow: if you try to encode some of these messages in columnar format, it means you're batching a bunch of messages. The approach that we've taken with NPM, and that we can change, is to treat each event individually, so that you don't have to worry about batching.
A: There are some ordering constraints where you want to have a report for a process before the socket arrives. So if you batch all the process logs together, so that you can have this nice columnar format, then you lose the ordering of process before socket. But columnar encoding is an extremely valid way, if you can sustain batching, to get low encoding overhead.
A: Okay, I'll blaze on ahead. So let's talk about encoding for a minute. I'm going to go top to bottom, left to right. First, let me see if I can get a pointer. So the schema: it should be familiar to you from protobuf- and Thrift-like languages. You have a message ID, a name for the message, and then fields; each field has a field ID, the type, and the name. A relatively simple schema, and the system,
A: the Render framework, creates these structs from those. You can see it has the ID of the message and then encodes the fields into this struct. So you can imagine encoding this is really fast: you can encode directly into the message, or the framework provides serialization functions.
A: This is an eBPF function, so it's like C; it's geared to run inside the Linux kernel, so it's not as easy to read or use as the C++, but you still get a function where you can say "I want to write a message with these fields" and the framework serializes it.
A: So how do you decode this? This is where you have two types of structs. This is the wire struct, with the buffer that you send on the wire, and then "serv", which is what the server encodes in memory when it parses the message. The naming can change, but just so you know, there are two types of structs: one of them in the receiver and one of them in the sender.
A: You can see the sender struct is packed to get minimal size. The two two-byte fields are grouped together, and then there's a 32-bit field, so this whole thing is 64-bit aligned. Then you put the 64-bit field, then the 32-bit field, and then the string of bytes, which doesn't have to be aligned to a 64-bit boundary, right?
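To illustrate that packing idea with a generic sketch in Python's struct module (not Render's actual layout): grouping the two 16-bit fields with a 32-bit field fills an 8-byte slot exactly, so no padding is needed before the 64-bit field.

```python
import struct

# Packed layout: two 16-bit fields + one 32-bit field fill an 8-byte slot,
# then a 64-bit field, then a 32-bit field ("<" disables all padding).
packed = struct.pack("<HHIQI", 1, 2, 3, 4, 5)
print(len(packed))  # 20 bytes: 2 + 2 + 4 + 8 + 4, nothing wasted

# A naive ordering with native alignment ("@") inserts padding instead:
# 6 bytes before the Q, 2 more before the second I on typical platforms.
padded = struct.pack("@HQHII", 1, 2, 3, 4, 5)
print(len(padded))  # larger than 20 because of alignment padding
```

This is the gap the framework closes automatically: it reorders fields so the wire struct carries no padding, independent of the order the schema declares them in.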
A: So the framework packs all the fields together, so that you don't have the gaps that you get when you lay things out in memory. And then for a variable-length field: for example, there's a string which is the command line. You can see it here, the command line. This is encoded (sorry about that), this is encoded just after the struct. So you can think of it as, whatever it is, 20 bytes of struct, and then afterwards you have
A: the command line, and this length argument, this length field, tells you where that ends; all those bytes are the command line. And in general, if you have multiple variable-length fields, those are encoded in these offsets here in the fixed struct. So it's a relatively simple format: you get a fixed-size struct, then variable-sized fields, and the fixed-size struct has these offsets to tell you where each field starts and ends.
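A toy version of that wire layout in Python; the field names and sizes here are made up for illustration: a fixed-size header carries a length field, and the variable-length command line follows immediately after the struct.

```python
import struct

# Illustrative fixed header: msg_id, cmdline_len, pid, timestamp.
HEADER = "<HHIQ"

def encode(msg_id, pid, timestamp, cmdline: bytes) -> bytes:
    # Fixed-size struct first, variable-length payload right after it.
    return struct.pack(HEADER, msg_id, len(cmdline), pid, timestamp) + cmdline

def decode(buf: bytes):
    msg_id, cmdline_len, pid, timestamp = struct.unpack_from(HEADER, buf)
    off = struct.calcsize(HEADER)
    # The length field in the fixed part tells us where the string ends.
    cmdline = buf[off:off + cmdline_len]
    return msg_id, pid, timestamp, cmdline

wire = encode(546, 1234, 99, b"/usr/bin/curl -s example.com")
assert decode(wire) == (546, 1234, 99, b"/usr/bin/curl -s example.com")
```

With several variable-length fields, each would get its own offset/length slot in the fixed part, as described above.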
A: So this is the wire format. The parsed struct that you get in the receiver, this is built for convenience. So here this command line has pointers; it looks like a string.
A: It has pointers into the offsets of the original message, so that you can manipulate the variable-length parts easily. And in general you can build accessors, functions that allow you to access certain fields, so that it really feels more like protobuf; underlying it, it's structs, just a handful of pointers and fields. And then the handler methods that the framework provides: it creates these stubs for you that you can inherit from and implement.
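The zero-copy flavor of that parsed view can be sketched in Python with memoryview (illustrative only; Render does this with pointers in C++): the parsed object decodes the fixed fields but holds a view into the original buffer for the string, rather than a copy.

```python
import struct

class ParsedProcessMsg:
    """Parsed view over a wire buffer: fixed fields are decoded eagerly,
    the variable-length command line stays a zero-copy view."""
    HEADER = "<HHIQ"  # msg_id, cmdline_len, pid, timestamp (illustrative)

    def __init__(self, buf: bytes):
        self._buf = memoryview(buf)
        self.msg_id, n, self.pid, self.timestamp = struct.unpack_from(self.HEADER, buf)
        off = struct.calcsize(self.HEADER)
        self.cmdline = self._buf[off:off + n]  # points into buf, no copy

wire = struct.pack("<HHIQ", 546, 4, 7, 1) + b"bash"
msg = ParsedProcessMsg(wire)
# msg.cmdline behaves like the string without the bytes being duplicated
```

The accessor still reads like an ordinary attribute, which is the convenience point made above.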
A: One reason to create these parse structs, which is perhaps not the major reason, is convenience of access. You want your variable-length parts and strings to just have pointers into the original format. So you create this parse struct that points into memory; you don't have to copy the variable-length field, but at least you have these pointers. So you keep both the wire struct and the parse struct, but it's much easier to access strings. We found that…
A: We didn't want to copy all the strings, in case they're big or we don't need them, but it was so much more convenient to access them when you have these views into them. But that is not the major reason to have this translation. Like the ordering: you can keep it the same; it's just code and it's local to the receiver, so it's almost arbitrary, you can select whatever you want in the framework. I guess I found these fields…
A: Researching this presentation, I found you can just look at this message. These are real messages from the eBPF collector, and you can see that it reorders the fields, and it's not really necessary; I think because this length field turned into pointers, it changed its alignment, so it just packed it differently in memory. But the major reason is forwards and backwards compatibility, and this ties into your question from before: what happens when you're using structs?
A: You don't want to lose all of the nice semantics of interoperability. What if the sender has a different version of the schema than the receiver?
A: What can happen is, by doing this translation, you can accommodate new fields, removed fields and, to some extent, changes in sizes of fields. Like if
A: a number was encoded in one byte, but the new version encodes it in two bytes, maybe you can accommodate that, right? So you want to have this translation for forwards and backwards compatibility, and this is what I want to talk about in the next couple of slides.
A: Cool, so how does Render do this translation? The answer is: first descriptors, and then translators. So let's start with descriptors. Descriptors are compact representations of a message's content. I mentioned before that some serialization formats are self-describing, so they tell you, for each field that you're going to encode... for example, protobuf has this varint that it encodes before each field, which tells you the field ID and the type of the field that is going to follow.
A: What we've done is we've taken all of this and put it into a descriptor that you send almost out of band, when you start the communication, or just before you send a message of each type, that tells you how the message is arranged. It's a super simple format: it has the ID of the message, the number of fields it has, and the number of arrays. You can see here, this is message number 546.
A: It has five fields and one array. Then you have five two-byte numbers that give you information about each of the fields, their types and so on, and then the length of the array, 16, right? So this tells you the structure of the message: you can reconstruct from this descriptor all the offsets of all the fields and all the field IDs. So you know the format of the message you're going to get.
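A hypothetical descriptor along those lines, encoded and decoded with Python's struct module. The exact byte layout of Render's descriptors isn't shown in the talk, so this layout is an assumption for illustration: a small header (message ID, field count, array count) followed by one two-byte code per field and one two-byte length per array.

```python
import struct

def encode_descriptor(msg_id, field_codes, array_lens):
    # Header: message id, number of fields, number of arrays (2 bytes each),
    # then one two-byte type code per field, then one two-byte array length.
    head = struct.pack("<HHH", msg_id, len(field_codes), len(array_lens))
    body = struct.pack(f"<{len(field_codes)}H", *field_codes)
    tail = struct.pack(f"<{len(array_lens)}H", *array_lens)
    return head + body + tail

def decode_descriptor(buf):
    msg_id, n_fields, n_arrays = struct.unpack_from("<HHH", buf)
    fields = struct.unpack_from(f"<{n_fields}H", buf, 6)
    arrays = struct.unpack_from(f"<{n_arrays}H", buf, 6 + 2 * n_fields)
    return msg_id, list(fields), list(arrays)

# Message 546 with five fields and one array of length 16, as in the talk.
d = encode_descriptor(546, [1, 2, 3, 4, 5], [16])
# The whole descriptor is 18 bytes, cheap to send out of band.
```

From such a descriptor a receiver can recompute every field offset, which is all the translation step below needs.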
A: With these descriptors, you can transform between versions. You can transform between wire format and wire format (old to new, or new to old), or from wire to parsed, for different versions, whatever; these descriptors tell you where all the fields are. So what you do is, if you have a message... and you can look at the example here for a moment.
A: Let's say the source has fields number one, three, and five, and my version of the parser knows about fields one, five, six, and seven. Then I need to copy only fields one and five, the common fields, and what you do is you look at the offsets of the source for fields one and five and copy them into the offsets of the destination fields one and five.
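That field-matching step can be sketched like this in Python. The schemas here are hypothetical maps from field ID to (offset, size); a real implementation would derive them from the binary descriptors:

```python
def translate(src_buf, src_fields, dst_fields, dst_size):
    """Copy only the fields that source and destination have in common.
    src_fields/dst_fields: {field_id: (offset, size)} derived from descriptors."""
    out = bytearray(dst_size)
    for fid in src_fields.keys() & dst_fields.keys():
        s_off, size = src_fields[fid]
        d_off, _ = dst_fields[fid]
        out[d_off:d_off + size] = src_buf[s_off:s_off + size]
    return bytes(out)

# Source has fields 1, 3, 5; destination knows 1, 5, 6, 7 (the talk's example).
src = bytes([0xAA, 0xBB, 0xCC])  # field 1 at 0, field 3 at 1, field 5 at 2
src_fields = {1: (0, 1), 3: (1, 1), 5: (2, 1)}
dst_fields = {1: (0, 1), 5: (1, 1), 6: (2, 1), 7: (3, 1)}
out = translate(src, src_fields, dst_fields, 4)
# Only fields 1 and 5 land in the output; unknown fields stay zeroed.
```

Field 3 is silently dropped and fields 6 and 7 stay at their defaults, which is exactly the forwards/backwards-compatibility behavior described.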
A: So, relatively simple. If you have variable-length fields and you have these offsets, then you need to do some pointer arithmetic in order to point from the destination to the source, but it's all pointer arithmetic; it's relatively simple. It's like: take the offset of the source plus whatever is written in the offset field, and that is your pointer to the variable-length string, for example.
A: So it's all super simple when you have a handful of fields in your message. The messages that NPM encodes have between 3 and 15 fields. What we found is that, if you have more than 15 fields, you're probably better off splitting: the semantics usually mean that it's really two messages. I think there's no single message that has so many fields. And so,
A: if you think about it, the translation is super simple: you do some pointer arithmetic, you do some copies, and you can do it with just a handful of reads and writes, right? And you want the message decoding,
A: this is message decoding, you want it to be really, really fast. So what we've done with Render is we've JIT-compiled this translation function between source and destination using LLVM. If you look at how our code is structured, you have a from-descriptor and a to-descriptor, and this function returns a function, an LLVM-compiled function, that translates between the source and the destination.
A: It also has some metadata, because you have a function that just moves bytes around, so you want to do bounds checking and that sort of thing, so that an attacker cannot force you to just overwrite pieces of memory. But you do all of that; the descriptors have enough information to do this safely. There was a question?
E: I have a question. So this compiling happens at runtime, when you first see the descriptor, right? When you receive the descriptor, and I guess that happens when the connection is established the first time, that's when you need to do the compiling, to have this translation function from the source to whatever is your internal descriptor that you're using. Is that correct?
A: Yes, and we have a caching mechanism where, the first time you see a descriptor, you put it into a hash with the compiled function, so you don't have to recompile when you get the same message again. Because if you have a handful of formats, how many versions are in the wild? You have 50, 100 versions in the wild, so you just compile all of these once and keep them, right? Yeah, okay, cool.
A: So, some of the limitations I was able to think of. First, we made this decision to limit message sizes to 64 kilobytes, and we were thinking: you're reporting on things that happened in the operating system, so how big can things be?
A: If you want to report multiple things, then each one is less than 64 kilobytes and you just report multiple things. But, in hindsight, we should have made the offsets and lengths 32 bits instead of 16 bits, because later on we added LZ4 compression on top of everything, and that brings everything back; you get the savings back. So I'm not as worried about an increase in size
A: if you used larger offsets. The other lack of features: it has no support for nested structs. We could have added nested structs, but it's more complexity; we just hadn't gotten around to it. And the question is: do you really need nested structs? This is not a general-purpose message encoding and decoding as it's built today. So do you really need complex hierarchies of structures? I don't know; maybe they're used for some cases, but anyway, this could be added.
A: It's not a huge problem. The current implementation is only in C++; there's nothing in Go, it doesn't have a Java implementation. With protobuf, for example, it's very easy to decode everything in Python. So what we've done to work around this is we've created output of messages into JSON and the OpenTelemetry logs format. So if you want more flexibility, you can get it that way, but really there's
A: no reason why this shouldn't have a Go backend; you can port the code, but it's not there yet. And while we have used the JIT transformer for transforming between versions, overwhelmingly our use case is between the same version of the same schema, that is, between wire message and parse message. The maturity of the code, the testing of the code, was mostly for same-version translation.
D: I have a question about the translation. If the majority of the cases are same version to same version, was there ever a consideration of maybe just having a number of collectors for individual versions of your wire format, so that you don't have to have an extra transform layer on top of it, where you can pre-compile that at any point in time? Or some alternative to having that less-used code path in there?
A: Yeah, I think it would be used; we just haven't exercised this functionality. And we had considered it: you have a compiler, the compiler can create parsers, you can qualify them with the schema version or a hash of the schema, and then you have a bunch of these C++ files that parse each version, and then you just compile all of them together every time you create the backend, the backend pipeline.
A: So we considered that, but you have to keep state: you have to keep all the versions of your schema, create a database, and somehow, every time you have a release process, save that file and compile it. You can do it…
D: I mean, in the worst-case scenario you can, but most systems that I've used with large-scale volumes have five or six versions of any particular message at most in play at one time, and you don't really need to be converting from everything forever in history; you've eventually deprecated some messages, but…
A: Agreed, with the slight caveat, which I don't think is a deal breaker, of forwards compatibility. Can you have a newer version of the code send a message to your older analysis pipeline? If you've never seen the schema, do you have a parser for it, or do you just throw it out? If you have a descriptor format, then you can still make some sense out of it. But, depending on the use case, it might not be a big deal.
A: Okay. So, the scale that this runs at, just to get an order of magnitude of how many events this processes: these are all benchmarks, by now mature benchmarks, that we've run at several customers, and you can see this distribution of different message types. Each category has different types of messages, so there are probably 30 or 40 messages that are exchanged in the system, and the systems generate…
A: Every collector generates thousands of events per second, so if you run on a thousand-node cluster, and we have several of those customers, very, very large clusters, then you can have millions of events per second, I mean several millions of events per second. And with the typical framework we've been able to handle several thousands of nodes on very, very modestly sized deployments.
A: I haven't been authorized to share exact overhead numbers. We should run benchmarks; we should have publicly available benchmarks at some point, and I hope to have that. But the volumes, which at least from what I've heard are extremely high, millions of events per second, we've been able to handle.
A
So
main
ideas
and
takeaways,
the
first
main
idea
is
use
a
schema
have
descriptors,
and
then
you
can
have
a
very
simple
encoding
decoding
format.
A
You don't need field IDs in your messages, and this makes the messages smaller, but it also makes your decoding logic much more efficient: you don't have to have as many ifs as you're parsing the data, and the descriptors alone are enough to translate between versions for backwards and forwards compatibility. So the first idea is: use descriptors.
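[Editor's note: a minimal sketch of the descriptor idea described above. The field names and layout are hypothetical, not render's actual format; the point is that when both sides share an ordered descriptor, the wire message carries only packed values, with no per-field tags as in protobuf.]

```python
import struct

# Hypothetical descriptor for one schema version: an ordered list of
# (field name, struct format character).  Both ends hold this descriptor,
# so the message itself needs no field IDs.
DESCRIPTOR_V1 = [("pid", "I"), ("bytes_sent", "Q"), ("port", "H")]

def encode(descriptor, record):
    fmt = "<" + "".join(t for _, t in descriptor)
    return struct.pack(fmt, *(record[name] for name, _ in descriptor))

def decode(descriptor, payload):
    fmt = "<" + "".join(t for _, t in descriptor)
    values = struct.unpack(fmt, payload)
    return dict(zip((name for name, _ in descriptor), values))

wire = encode(DESCRIPTOR_V1, {"pid": 1234, "bytes_sent": 9876, "port": 443})
assert len(wire) == 14  # 4 + 8 + 2 bytes of values, nothing else on the wire
assert decode(DESCRIPTOR_V1, wire)["port"] == 443
```

Decoding is a single fixed-layout unpack, which is why there are far fewer branches than tag-based parsing.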
A
Now, it doesn't have to be this exact efficient binary format; you could have protobuf for the descriptors. But I think the idea of using descriptors separate from the messages is one of the ideas we've leveraged. The second idea is having a wire struct and an application-layer part, what we call a parse struct, so that the application can have a convenient interface. You can also have the parse structs implement some methods on them,
A
If you want, in your programming language. The nice thing about having concise descriptors is that you don't need to worry about the entire schema language and the names of the fields in order to do the translation between two descriptors. So the descriptors are super efficient to send over the wire, and they're also a good interface
A
to reason about when you do the translation, and you can do the translations using a JIT. In systems where you have, like, four or five versions of each message in flight at any given time, and many repetitions of each message type per second, it makes sense to front-load the version handling into a JIT. Overall, I've shown you a benchmark
A
That shows you can get three to four orders of magnitude lower CPU overhead and similar size to protobuf, but it needs more benchmarking. I haven't rerun the benchmarks, and it's not the render benchmark. So, those are...
A
And overall, just to finish, this was probably a lot, but I think as a community we should lean towards standardizing formats that have very low overhead. I wouldn't say the lowest overhead possible, but can we go low-overhead? Because, first, let's not, you know, release as much carbon into the atmosphere.
A
Let's,
let's
be
you
know,
save
on
resources,
provide
cost
savings
to
the
users,
and
also
this
really
enables
new
use
cases.
Ebps
npm.
If
we
had
10
times
more
serialization
deserialization
overhead
would
probably
not
be
a
viable
product,
as
is
I
guess,
we've
implemented
it.
So
it's
like
the
this
is.
This
is
important
to
to
enable
new
use
cases.
A
There
might
be.
You
know,
morgan.
I
think
you
mentioned
captain
proto.
There
might
be
other
frameworks
that
are
available
when
we
started
this
kind
of
the.
We
wanted
more
control
over
the
format.
A
So
we
didn't
mind
the
effort
and
we
had
you
know
we
could
minimize
the
overhead
integrated
with
other
parts
of
the
system.
But
if
we're
standardizing,
maybe
there's
another
format,
so
I'm
not.
A
This
is
the
the
you
know,
the
end
of
the
road
and
we
should
definitely
adopt
render,
but
so
if
we
you
know,
if
there
is
another
more
mature
library,
we
should
use
it,
but
anyway
render
is
part
of
open
telemetry.
At
this
point,
it's
part
of
the
edpf
contribution,
so
it's
available
to
us.
B
if we want to standardize using it. Cap'n Proto is the only one that comes to mind for me for something that does this, like, extraordinarily performantly. I wouldn't even call it serialization; it's effectively recording data objects from memory into a wire format and then re-materializing them somewhere else.
E
Yeah, when I was designing OTLP, I did look at both Cap'n Proto and FlatBuffers, and protobuf didn't win because it was the most performant; it won primarily because it was the most available one in different languages and the most mature one. So I'm easily convinced that there are more performant solutions available.
E
Now, I guess the important question for me would be: what would be actionable for us, for this SIG, for the Log SIG? Since I guess this implementation of render is available in C++ only, it can't possibly be the protocol for OpenTelemetry, right? It needs to be available everywhere, for all of the languages. Could this maybe be a specialized protocol for some specific use cases, like eBPF? So what is actionable for us, what can we do here? And I guess another question I have for you,
E
Jonathan, is, setting performance aside for a moment, whether you saw any limitations of the data model itself, of OpenTelemetry's data model. Whether something doesn't fit well, is not nicely modeled with the current data model, out of the data that eBPF needed to represent. That sort of feedback, I think, would be very useful for this SIG as well.
A
Let
me
let
me
answer
your
second
questions,
because
this
is
the
second
question,
because
I
think
that
that
is.
Maybe
there
is
this
one
gap
in
the
open
floor,
which
we
did
a
model
that
as
but
I
don't
know,
if
you
know,
maybe
it
should
be
the
specs
egg
or
another.
A
But
if,
if
you,
if
you
granted
me
one
wish
on
open
telemetry,
I
think
it
would
be
better
handling
of
stateful
connections
in
the
sense
of
there's
a
lot
of
state
in
the
operating
system
and
what
we've
found
I've
shown
you
the
slide.
Where
every
container
there
could
be
hundreds
of
thousands
of
socket
reports
that
you
have
to
send.
A
What
what
we've
been
relying
on
before
going
open
telemetry
is
having
a
persistent
connection
from
a
collector
to
the
back
end,
and
so
the
back
end
and
the
collector
can
rely
on
shared
state
and
that
shared
state
would
be
like
okay
container
id
number
uuid
has
like
a
bunch
of
these
properties
that
you
don't
want
to
repeat
on
every
socket
event.
Okay,
so
that
when,
when
you
get
a
new
socket
event,
you
can
just
look
it
up
in
memory
in
in
the
back
end
and
not
not
relay
not
relay
all
the
information.
A
But
I
I
don't
know
if
it's
relevant,
but
it's
like
my
number
one
concern
for
you
know
an
integration
with
open
country,
yeah.
E
Let's say you sent deltas of the dictionary over the connection, so, yeah, yeah, I guess. But again, we're not going to delete all of OTLP at this stage, right? It already exists; it's accepted as a standard for traces and metrics. So at this point we're looking at how OTLP represents logs, right? That work needs to be completed, and we will do that. But I guess what you are saying is probably good feedback for the next version of the protocol.
E
When I was looking at that, almost three years ago, they weren't; maybe they are now. We can maybe use FlatBuffers, or maybe we're brave enough and we do our own, right? We have the resources now; maybe we do that with render. But I think at this point the short-term goals for this SIG are to finish the data model specification for logs and to finish the definition of OTLP for logs. So I guess I'm primarily focused on that.
A
Yeah, and again, oh, I'm sorry if this was overwhelming. I was asked to give, you know, a viewpoint into structured logs, and I hope it wasn't completely off topic.
C
Can I offer a thought experiment? We have this OTLP protocol for spans, and the sort of standard interaction is that the SDK puts together a bunch of spans with a resource record, and that resource record is going to be sent on every single request from now until infinity. And, obviously, you've just shown us a way that we could have done this
C
that was, you know, stateful, so that you transmit your resource when you start up and then refer to it. It'd be nice to see an incremental way that we could add that support to OpenTelemetry. First of all, for the reasons you've given: compression is obviously going to win. But there are applications that are sort of more semantic in nature, which I think are worth thinking about, for you and everyone here.
C
I would like to be able to have an identity where I don't actually have all my attributes, for example. So I'm going to publish my resource, but: promise me you're going to reconstitute or rehydrate my resource downstream. I'm only going to record my identifier, and then some other external system is going to publish the rest of the information about me, because I don't always know about myself.
C
This is how Prometheus has essentially captured the monitoring world for metrics: they're the only system that knows how to reach out to Kubernetes and all the other things to figure out what your actual set of resources is, because usually, in that monitoring environment, the clients don't know about themselves. So it'd be nice to see an incremental proposal that could let us keep what we have today, but move in the direction of recording resources either out of band or once, at the start of your process.
E
They force the state onto all nodes which need to deal with the data in any meaningful way. If you want to have a collector which understands the data and can process it, the collector has to keep the state as well, which may be a major problem, right? You see how that can become a problem: the collector is an intermediary between thousands of nodes and the back end.
E
Suddenly your collector is forced to keep all that same state your back end has to keep, for all those sources. Otherwise, say I want to replace a resource attribute, or rewrite it, or redact it: I can't, because I don't have it, right? Or I want to change the value of an attribute based on a resource attribute, but I no longer have the resource attribute, because I didn't keep the resource, because that's the state that is no longer sent over and over, right?
E
So stateful connections are very nice when you have to keep the state anyway, and when you have a back end which is the final destination; then it's not usually an additional requirement. But when you have intermediaries, you force that state into the intermediaries, unfortunately, and that is often a problem.
E
So again, my view is that stateful protocols can be very, very nice in specialized cases, like the last leg, right, the last mile from the last collector to the back end, or things like that, maybe, or when you don't have an intermediary, when you don't have a collector and you go straight from your application to the back end. In more generic applications, when you have wider ways of deploying things, you have to take into account different topologies, different kinds of infrastructure.
C
One of the reasons why the Apache Arrow project has come up as a proposal for columnar I/O is that it lets you batch data and get many of the benefits without having state. So it seems like, if the protocol offered flexibility, we could end up in a deployment where you get the state and compression you want.
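[Editor's note: a minimal sketch of the stateless columnar-batching idea mentioned here. The field names are illustrative and this is plain Python dicts, not Apache Arrow; the point is that within one batch the resource is stored once and per-span fields become column vectors, amortizing the shared attributes without any connection state.]

```python
# Convert a list of row-oriented spans into one columnar batch that
# carries the shared resource a single time.
def to_columnar(resource, spans):
    return {
        "resource": resource,                          # stored once per batch
        "name":     [s["name"] for s in spans],        # column vectors
        "duration": [s["duration"] for s in spans],
    }

spans = [{"name": "GET /", "duration": 12}, {"name": "GET /x", "duration": 3}]
batch = to_columnar({"service.name": "web"}, spans)
assert batch["resource"]["service.name"] == "web"
assert batch["duration"] == [12, 3]
```

Columnar layout also compresses far better than repeating the same resource attributes on every span.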
C
Maybe,
but
we
would
definitely
like
to
have
that
last
mile
from
an
hotel,
collector
to
say
lightstep,
giving
us
column
compressed
data
rather
than
sending
up
every
span
compressed
in
yeah,
and
you
know,
or
so
you
optimize
for
compression
on
last
mile
right
and.
C
And-
and
there's
been
a
discussion
about
this
is
obviously
a
technology
question,
but
like
for
metrics,
do
we
really
want
to
copy
every
resource
attribute
to
every
metric
as
it
comes
in,
or
would
we
rather
have
like
some
entity
id
on
our
metrics
as
a
single
attribute
and
then
essentially
rehydrate
or
reconstitute
those
in
the
query
path
so
that
we
don't
have
to
expand
the
data?
The
way
jonathan
was
describing
earlier,
which
I
guess
brings.