From YouTube: 2021-12-09 meeting
B
Yeah, partially. We were there for a few days and had a talk there, so.
B
It's so crowded. I feel like KubeCon, at least for us, is a little bit more digestible, because it's a little bit more focused, and the scale of these things is pretty high.
A
Yeah, I was going to maybe give an update on the meeting with the Logs SIG. I'll give a short talk on that, but no other agenda unless other people have items. We'll give, you know, two or three minutes for people to write down if they have an agenda item for the meeting now. Yeah, absolutely.
A
We're almost four or five minutes in, and since it doesn't look like anybody else is joining, I'll just give you a short update on the Logs SIG.
A
So if you remember, a few weeks back a couple of members from the Logs SIG asked for some detail on how the Flowmill collector (now the OTel NPM eBPF collector) serializes data, and how we think about structured logs. So I went to the Logs SIG a couple of weeks back and gave a talk about the serialization that is in the open-source repo.
A
What we have today is that the contribution has a compiler that takes an interface description language, which we call the render language, and compiles it into what you can think of as protobuf: it compiles it into serialization and deserialization logic. It has C support, so you can use it from eBPF, so in effect we have protobuf for eBPF in the contribution. So I talked a bit about that, and we use the same data format. It's very simple.
A
It creates these structs that you can then use to serialize data over the wire, so it's a relatively simple binary protocol. We talked a little bit about forwards and backwards compatibility.
A
There's this concept of descriptors, where you encode what fields are in each message. Because a message is a struct, it has no field identifiers on each field; it's just a bunch of data. So what we've done with the collector is create a format for specifying what each field means, mapping the byte offsets to the fields that you expect in each message, and that enables forwards and backwards compatibility. Anyway, we talked about that.
A
It seems that the Logs SIG is currently still in the process of standardizing the OTLP-based protocol, so protobuf-based, and the question was: do we need to think about a v2 of the serialization protocol, because it would be more efficient with structs, or should we keep protobuf? So that's an open question.
A
I've had a conversation with Tigran later. An open question is: is protobuf sufficient for serializing network data and eBPF data? Is there so much volume that we should start thinking about a new encoding format? The struct encoding is basically free: the data is there in memory, you dump it, and there's no significant encoding or decoding logic.
A
And so we're thinking about how to benchmark to understand the implications of protobuf versus render or FlatBuffers. There are others out there too; Cap'n Proto is another standard, or implementation.
A
So if you know of someone who is interested in doing that kind of benchmarking, send them this way. The Splunk team would probably get to it at some point, but if there's somebody interested, I'm happy to work with them on it.
B
Same thing in our experience: protobufs are super efficient. You can go a little bit further, since there's a little bit of encoding in there, but it's fairly minimal. So unless you're really trying to optimize the last little bit out of it, I feel like protobufs are a pretty simple and straightforward serialization format, not too far off from structs, honestly. Yeah, agreed.
A
FlatBuffers was developed by a Google team, I believe the Fun Propulsion Labs or something, who worked on gaming at Google and needed serialization protocols for that. They ran these benchmarks comparing raw structs (which is practically what we implemented in render) against protocol buffers lite (not even the full protocol buffers) and their implementation, FlatBuffers, and you can see encoding a million messages.
A
Sorry, this is decoding, so encoding: a million messages took them 3.2 CPU-seconds on FlatBuffers, and it takes protobuf 185 CPU-seconds. So if you want to send ten thousand messages per second, you're going to pay about 1.8 CPU cores. I'm sorry, I'm doing the math on the fly, but I think that's what it comes out to, which is kind of expensive.
A
So if you're doing a thousand messages per second, you should expect to spend about 185 millicores, so it's not great on protocol buffers. You can get much lower, and this is why I think we should benchmark on real data, to see: do you really need it?
B
One interesting thing, I think, is what data types we're also trying to transmit. I see up there it says 10 objects containing an array, four strings, and a large variety of int/float scalar values of all sizes.
A
Yep, this is also why we need to benchmark, so we have the right mix. All the message specifications are in the repo. A lot of it is these structs that give you the socket id, and then the number of bytes, number of drops, latency: these go into arrays that you send, which is most of the volume.
A
Then you have metadata. Metadata could be process names, container ids, image names, and such, so those are the string type. Yeah, so the micro-benchmarks are, you know: there's lies, damned lies, and micro-benchmarks.
A
You know the phrase. The nice thing is that the Flowmill collector, the eBPF NPM in the OpenTelemetry contribution, has a protobuf backend that sends to the OTel Collector, and it still has the render protocol with the structs. So we can just run the two and benchmark, and I was thinking of maybe implementing these others.
A
You know, there's FlatBuffers, or Avro; a driver for that should be relatively straightforward, because it's all code generation, so you can have code generation for the Avro drivers.
B
We do use FlatBuffers where we're really trying to optimize. So yeah, if you're in that territory where you think it's actually going to make a big impact: I mean, the data you showed was pretty much showing the same thing we've come to the conclusion of, which is that FlatBuffers can be a lot more efficient.
A
So did Pixie use FlatBuffers forever?
B
No.
B
In the internals of our stuff we use some Arrow-inspired things for internal data tables and everything, and that goes over FlatBuffers. So we do have some use of FlatBuffers there.
B
It's not for transmitting data across the network or anything like that; we're just using FlatBuffers internally. So it's not exactly the same use case, but I'm just saying that when we were really trying to optimize, we looked at those as well, and I think they're pretty cool: very straightforward, and they make sense as well.
B
Nothing right now. I think, if we don't have anything slated for the next few weeks: we're going to start hitting the winter holiday time and everything, so I think it's going to be fairly slow. But maybe in the new year we should have a session where we figure out what we want to do as a group, what our objective is, and which directions we want to go.