From YouTube: 2021-10-28 meeting
B
Yeah, I'm calling in. Someone went surfing, so it's on and off with the rain. You went surfing? He went, yeah; one of my colleagues.
B
Oh yeah, Morgan was, not playing around with it, he was fixing the Zoom recording copying to YouTube, so they're refactoring the accounts. So you have to change the link.
C
How did Dash go? Is it done? It's done? It was done, yeah, Tuesday and Wednesday this week. It was good. It was intense. There's a lot between that and KubeCon sort of back to back, and re:Invent coming up; it's sort of a solid three- to four-week period for us, right.
C
I saw a couple of people. I didn't know to look out for you, but there were a lot of us there. Actually, I saw Ted, and Janna was there.
B
Great. It's five minutes past the meeting start time, and I have a lot of slides; I'll try to keep it minimal. You should interrupt me, and I think if we only cover modularity, we'll be in a good spot.
D
Hi everyone, I'm Kelsey from AppDynamics. I'm just going to be joining these meetings from now on, because we're really interested in all the work that's being done with eBPF and OTel.
B
Yep, okay, good. So, collector architecture. You've seen this slide before; I'm not going to go through it in the interest of time, but I'll go through the separate components, so we're going to talk about different aspects of this. This is the super-high-level image. A lot of what we've been talking about is: how do you handle modularity in an eBPF-based collector, right?
B
How do you allow multiple modules? The NPM product collects information about these different entities to compose this map of what's happening in your network; you want to have all the information. So the first thing is collecting socket information. This is the most basic, the core data you think about when you think about NPM: socket data. But what we found is that you really want to enrich it with further context.
B
You want information about the processes, the containers, the hosts that each flow is running in, and really you want this enriched even further. You want information from instance metadata; you want information about Kubernetes. For sockets, it really matters whether there's address translation, and Kubernetes does a lot of it.
B
Docker does some of it, and you want that so that you can get the IP addresses that you can match from both sides and understand where connections are really going: through to specific containers, and to outside entities like managed services.
B
And you want other modules to give you information about DNS, for example: what does the application think it's connecting to? And we've mentioned a couple of other applications. There's CPU, memory, and I/O: a CPU, memory, and I/O module that you could write, which the Flowmill NPM contribution already has what I'd call a beta implementation of.
B
You can have information about statistics for the NICs: queueing and waiting times for individual NICs. We mentioned profiling, looking at unencrypted payloads before they go through TLS; we're talking about security through file access and system calls. There's a handful, and Jonah, I think you had a couple that I didn't copy from that document we composed, but we've been talking about these. So you really want reusability, which motivates these modules.
B
The concept of modules, and I think we're all on board there. I think Omid was also driving there. So this is important; we really want to have this.
A
Yeah, the one other thing to mention here, context-wise, is that we have to figure out how to correlate or relate other signals using this data: logs, metrics, and traces. Tracing is obviously a really interesting one, and how we can potentially relate network or infrastructure with traces would be really interesting, if we can figure out how to do it. Once again, a lot of it depends on how we can access encrypted data from eBPF.
A
Yeah, those are easier to get, but ideally getting it to the trace level would be really powerful, I think.
B
I don't want to distract you too much from this. So another reason to have modules this way is to control overhead. I've shown collected numbers for the number of messages that a backend system would see, aggregates that we've seen in real production deployments, and you can see that for every piece of container metadata there could be, in the statistics,
B
hundreds of thousands of statistics on individual payloads. So if you try to get information about the container every time you report something about a socket, you'd have a lot of wasted work and high CPU utilization; it's not modular, and you'd be repeating yourself in a lot of the code. And the eBPF code runs inline with the transactions, the customer's workload transactions, so you'd actually be perturbing
B
the workload more than you'd want to, more than you need to. So another reason to have modules is to save on overhead: maybe do all of this work of correlating sockets to containers in user space. So modules are important to carry the data to user space, so user space can do this matching and you don't have to do everything inline with a transaction, where you incur cost for your customer workloads.
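As a sketch of that split (illustrative Python; the real collector is not structured this way, and all names here are assumptions), the eBPF side emits only a small per-socket record, and user space performs the join against metadata it cached once per process:

```python
# Sketch: join cheap in-kernel events against metadata cached in user space.
# The metadata lookup happens once per process, not inline on every socket
# event, so the customer workload is not perturbed by the enrichment.

process_cache = {}   # pid -> {"comm": ..., "container": ...}

def on_process_start(pid, comm, container):
    # Populated when a process appears, from a (hypothetical) process module.
    process_cache[pid] = {"comm": comm, "container": container}

def enrich_socket_event(event):
    # `event` is the tiny record the eBPF probe emitted: just ids + counters.
    meta = process_cache.get(event["pid"], {})
    return {**event, **meta}

on_process_start(1234, "nginx", "web-1")
evt = enrich_socket_event({"pid": 1234, "bytes": 4096})
# evt now carries process and container context without any in-kernel lookup
```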
B
Yeah, and I think this was one of the critical design points for this collector that enabled it to get to a quarter-percent CPU overhead. Otherwise you'd be doing container and process work for every socket, and that would become very expensive. Okay, so.
B
Sorry, yes, you asked a similar question on the document. This is throughput, not latency.
B
Right, this is resource utilization, and the reason you'd want to reason about it is if you're shuffling this data across availability zones or regions. If your monitoring system is remote to your workload, that's where you really care about this network overhead. Okay.
B
So how do modules work in practice? The way the NPM collector has implemented modules is that each module tracks the underlying entity in the Linux kernel. So you have a process module that tracks all the processes, so that you have, in user space, a consistent image of what's happening in the kernel in terms of processes.
B
This means that a module in user space can keep state, can keep information about the underlying entities, and can answer queries from other modules. For example, your process module can hold information about all the processes, so that when a socket wants information about processes, that information is available to the module. So how do you create a framework where modules can track state?
B
Sorry, the other piece of this is the third bullet point here: modules can aggregate. You can aggregate small pieces of telemetry in user space, from sockets, for example. You have an entity per socket, and then you have lots of byte counts or packet drops reported from eBPF; maybe the eBPF instrumentation you have is more fine-grained than what you want to report.
B
Then you can have a point of aggregation in user space, and in fact the collector implements that for sockets. Okay, so how do you do that? This is the diagram on the right-hand side.
E
Question, sorry, and I joined late, so I apologize if you already covered this, but for the user-space code: I assume your eBPF code is tracking the processes every time a process is created or destroyed, all that sort of stuff, and you're collecting that information into a perf buffer or some sort of map. How often is user space coming in and updating its view of everything that eBPF has collected?
B
It's like a hundred milliseconds. Yes, 100 milliseconds is the first-order answer, but it's more subtle; it's two slides from here.
B
You didn't miss it. All right, cool, thanks. So how do you track entities? Most of the modules that NPM has follow this pattern. You instrument, with a kprobe, the kernel event where the entity goes away: a process is killed, so the task is freed, right.
B
Then you instrument the start of all the processes, and between those two this avoids races. Now you get all the cleanup messages, so you never keep state around where you know that a process started but never saw that it died. You instrument the end event, you instrument the start event, and then you instrument eBPF code for some way for user space to scan the kernel.
B
For example, the code adds instrumentation on proc filesystem reads, and then user space goes and scans /proc. This is how you reconstruct an image of all the live processes, all the live sockets, all the live containers on the system: by instrumenting the underlying kernel implementation and then hitting that kernel implementation from user space. And this allows for a consistent view, through the rings, of the state inside the kernel.
B
You don't have races between your user-space program interacting with kernel APIs and, in the middle, some updates landing in your rings; you have everything flowing through the same set of rings.
B
Once that's done, so at this point, I should get a pointer, maybe I'll get a pointer, at this point here, when you're done scanning all the existing entities, you can start getting telemetry, and there's an invariant: for every point of telemetry that you get, like every socket stat, you already have the entity for that socket in the module, because the first part is consistent.
B
Right, it gives you all of the sockets that are live at that point, or all the processes, or all the containers.
B
Correct, yes. At this point you're going to get events for all the new entities, but you wouldn't know of any of the old sockets or old processes or containers, all the old entities, yeah.
B
Cool, great. So what do you need in order to make this work? You need the ability to load these probes incrementally. So how do modules interact with the framework? That's the question. You need a way to load several modules and then have them interact with the eBPF side, and while this is happening, you need to be able to read the perf rings.
B
As you're scanning the system, you don't want to fill up your rings with information about all the sockets in the system, all the NAT entries, all the processes. So you need to have your modules interact with your rings and read data from them.
B
You actually want all of the system to refresh, because if you have process instrumentation and it interacts with the container module, then while you're instrumenting processes you want the container module to refresh as you're building the process image, so that you can query the container module. So this system makes that happen: you first load the container module.
B
Then you load the process module, and as the process module scans, its data goes through the rings, so it's all consistent with the container messages, and the system refreshes state as it's scanning so that you don't fill up the rings, and you get this liveness property on the rings: you keep working. And the third thing you need is to have the messages from the rings strictly ordered based on
B
causality. If you can get ordering based on actual time, what's called linearizability, where the messages are ordered by the time when they actually happened, that would be best; but otherwise you just don't want messages for the same process, or messages for a container that relates to a process, to be reordered with respect to that process. And you need the framework to supply that for you when reading the rings.
B
You need a way to be consistent in time. So this is an individual module, and I'm motivating why you want a framework that manages your rings, basically.
B
It's just a method on your class: the socket module can call into the NAT module, the NAT module can call into the socket module; you engineer it however you want. What you want the framework to supply to modules is a way for one module to provide these invariants to another module.
B
For example, you want your process module to say: when you receive a socket event, the process for that socket will have already been processed and will exist in the process module. You want this cross-module consistency, and in order to do that you need consistency of the events flowing between the different modules. So you're asking yourselves: why am I even talking about this?
B
If you had different modules and each one had its own set of rings, and you'd update one module from one set of rings and another module from another set, then you'd have the problem of how to synchronize the two rings. How do you make sure the process module hasn't read further into the future than the sockets I'm receiving, and hasn't already cleaned up the process belonging to the socket?
B
You need this consistency across the modules, and the way the NPM implementation handles this is to run all of the messages through the same set of rings, so that they're consistent. You have a single timestamp across, well, I'm running ahead of this slide, but these invariants are very useful, and now I'll switch to the next slide.
B
This is where we talk about how we implement this. Just to take a step back and reiterate why we're doing this: we want consistent event ordering, one, so that we can track entities; so that we can keep state, aggregate, and answer queries; and so that we can compose different modules and have these queries, where you don't have to have your eBPF instrumentation get the process information and container information.
B
You can have the process module get that, and then the socket module would query the process module, which is how NPM implemented it. As a side note, Michael, you said earlier that if you can shuffle a lot of this joinable data down to backend systems, you can do processing in backend systems. Definitely this also helps there: if you have consistent event ordering and you expose that to a backend, you can do this processing in the backend
B
if it's more complex, and you don't have to do all of it in the code; you can do it in the collector as well, like the OTel Collector. So keeping these invariants, that a socket is always reported between the start and the end of its process, for the same socket, is very useful. And this also goes for profiles: if you have a profiler and you have a stack trace, you want the process that the stack trace came from to always be reported between the start of the process and the end of the process. Okay. So how does the NPM framework handle that? First, we decided that all the modules have a single format, which I'll mention two slides from now, but the format, you can think about it:
B
It has a message ID and it has a timestamp. These two fields are critical: the timestamp in order to order messages across cores, and the message ID in order to dispatch the messages to the different modules.
B
And we found that an easy threading model to reason about is when you have serialized processing for all the messages. Think about all these messages: they're coming from, arguably, a distributed system; it's a multi-core system where events happen whenever they happen. You have processes interacting with the operating system; you have your hardware, like networking hardware, pushing interrupts into the operating system; and the operating system handles some
B
networking state or other state, like disk I/O state. So these events happen asynchronously, but the way the collector reads them matters.
B
The approach we've taken is to serialize that stream of messages: order all of the messages and then have a single stream that all the modules consume. This makes it really easy to reason about synchronization; there's no locking, it's a single-threaded execution model. And I told you that we were able to get this down to a quarter of a percent CPU, so there's a lot of room to grow within that single thread still, but I hope we can keep it.
E
It's the same for process creations, process deaths, socket creations, socket transmits, all these sorts of events? Are they individually reported?
B
Yes. And for example, in the current implementation, everything's tunable: whenever I say a constant, we can change all of those. But currently the kernel reports socket information at a 10-millisecond granularity. So the number of bytes, round-trip time, packet drops, every 10 milliseconds, per socket that is active.
B
Okay. So about the ring and then the multiplexer: I think a framework should offer code that does that for you; you don't want each module to implement its own, and the reason is engineering. There's a lot of engineering that goes into some of these demultiplexers. For example, we found that, I think it's the default implementation, a ring signals on the file descriptor associated with the ring on every enqueue, and that becomes super expensive.
B
I think just that signaling can bring your overhead up by a whole percentage point of CPU; it's huge. So the NPM framework already implements polling: you poll the rings every hundred milliseconds instead of having the rings push events into the collector. But when there is very high load on the system, you could have a burst of events within 100 milliseconds, or consistent bursts of events into one ring, that
B
might overflow that ring. So what NPM implements is a hybrid approach. The perf rings have this concept of a high watermark, where they signal to user space whenever you pass some threshold of bytes occupied in the ring. We set the threshold at half the ring size, so usually you poll every 100 milliseconds, but if there's a burst of really high event volume, then the collector drains it early. So what it does:
B
It sets a timeout but also polls the file descriptors, and we have eBPF only do this expensive notification on the file descriptors when the rings are half full or more. So that engineering went in. That's not really specific to the framework, it's not the module architecture, but we should have that; I think we should offer that to users.
B
So what we've done is we've implemented this perf container. Oh, we're over time already, but this was the important bit, where you can dequeue very efficiently. I don't know, I can stay a little longer, but you have to go, yeah.
B
I can probably finish; I mean, there's one more slide I would have wanted to show, on the format.
E
If you want, you can take half the time next time, and then I can start the Pixie stuff after and spill over. We have a session afterwards, right? It's two weeks from now, and that's empty.
B
Thanks. But I think that overall those were the important bits: we wanted to talk about modularity, and we were able to cover that. Great. So we'll see you all next week. Yeah, cool, thanks, Jonathan.