Description
Media Streaming Mesh is a new concept for enabling real-time applications in Kubernetes. Most real-time applications (such as multiplayer FPS games, multi-party video-conferencing, and CCTV streaming) are built on real-time protocols like RTP rather than HTTP, and are not well served by today's Kubernetes networking and service meshes.
So I'm going to present this project, Media Streaming Mesh, which is an open source project. We recently got permission from Cisco to open source it, but of course our hope is that other people are going to participate in this, and I'll hopefully come on to how we're going to try and enable that. Apologies for my slightly croaky voice following last week's Covid.

Right, so I wanted to run through: why would we want to do this? By media streaming (I'll come on to what I really mean) I'm talking real-time stuff, not the kind of delayed HTTP live streaming stuff, but actual real-time media streaming. Why do we need to solve that in Kubernetes? What are the challenges? What are the benefits that we think our approach has? What are some of the use cases we want to address? Then, how are we building it? I'll do a quick demo, which hopefully the network is going to be fine for, and then a call to action at the end: it's open source, so we want action.
In terms of applications, I guess my contention would be that Kubernetes has been very much focused on web applications up to now, and it's a bit of a chicken and egg: is that because that's all anybody cares about, which is why that's all there is, or is that because that's what the infrastructure is built for? Again, my contention would be that it's more of the latter. If you look at applications and divide them into different categories, using the famous two-by-two matrix that's beloved of MBAs, you can divide them into non-real-time apps or real-time apps, and then you can divide them by their semantics into whether they're interactive (so request-response semantics) or streaming (so pub-sub).

Web applications are very much in the top left corner, and that's not what we're trying to solve with this. If you want to do that, Istio, Envoy, service mesh: that's there. This is trying to address the real-time space. Initially I was looking at doing both the kind of request-response stuff, which could be games, though actually when you dig into real-time games you find it's quite close to pub-sub in some ways, with the updates that get sent out on the tick to each player. Which again is where this kind of fuzzy idea came from: it's not always as cut and dried as "that's pub-sub, that's interactive". And certainly that's true when we come on to live media, which is where we're focusing down now; back to the football example of this not being your HTTP live streaming.

Clearly HTTP live streaming does not work in terms of real time, because if I'm watching a football match on cable and my friend's watching it over the top, I have to be careful not to say "wasn't that a great goal", because he might be kind of pissed off with me. So yeah, how live is live? Back again to fuzziness on real-timeness. One thing to say quickly on online games: as I say, initially I was focusing there as well, but I figured the focus was a bit broad, so we narrowed down on live media.
Initially, at least, we can go after anything that uses the RTP protocol, which, for those who know it, is the basis for a lot of real-time media. Also, there are some good projects out there already on gaming: Google for Games has the Quilkin proxy, which I took a look at, and did a bit of work on, about a year ago, and that seems pretty nice for doing online games.
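For reference, since everything below leans on RTP: each RTP packet starts with a 12-byte fixed header carrying, among other things, a sequence number, a timestamp, and the SSRC that identifies the sender (RFC 3550). A minimal Go sketch, independent of the MSM code, of pulling those fields out:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// RTPHeader holds the fields of the 12-byte fixed RTP header (RFC 3550).
type RTPHeader struct {
	Version     uint8
	Padding     bool
	Marker      bool
	PayloadType uint8
	SeqNum      uint16
	Timestamp   uint32
	SSRC        uint32
}

// parseRTP decodes the fixed header; real code would also handle CSRCs
// and header extensions.
func parseRTP(pkt []byte) (RTPHeader, error) {
	if len(pkt) < 12 {
		return RTPHeader{}, fmt.Errorf("short packet: %d bytes", len(pkt))
	}
	return RTPHeader{
		Version:     pkt[0] >> 6,
		Padding:     pkt[0]&0x20 != 0,
		Marker:      pkt[1]&0x80 != 0,
		PayloadType: pkt[1] & 0x7f,
		SeqNum:      binary.BigEndian.Uint16(pkt[2:4]),
		Timestamp:   binary.BigEndian.Uint32(pkt[4:8]),
		SSRC:        binary.BigEndian.Uint32(pkt[8:12]),
	}, nil
}

func main() {
	pkt := []byte{0x80, 0x60, 0x00, 0x01, 0x00, 0x00, 0x03, 0xe8, 0xde, 0xad, 0xbe, 0xef}
	h, _ := parseRTP(pkt)
	fmt.Printf("%+v\n", h)
}
```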
So, looking at media connectivity: if you want to do real-time media in Kubernetes, how might you do it? Well, the traditional model is using service meshes, so Istio, Envoy, etc., or Linkerd. You can do TCP apps, obviously very good for web apps, because you'll do the URL routing and you can do all the layer 7 stuff there. But there's no support for UDP. They're kind of adding that, but I think that's been driven more by the desire to do QUIC than by real-time apps; that's the impression I get. And of course real-time media itself is not supported.

Real-time media applications use RTP for their data plane. However, they also have a control plane, and that control plane typically runs over TCP, and one of the things it does is hand out port numbers for the data plane. So the moment you're using something like kube-proxy, you can proxy that control plane, but you don't see those data plane ports being negotiated, and so it breaks that.
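To make that concrete: in RTSP the client and server negotiate the UDP data ports inside the Transport header of the SETUP exchange. A plain L4 proxy such as kube-proxy forwards those bytes without understanding them, so it never learns which ports the data plane is about to use. An illustrative Go sketch (not MSM code) of what a media-aware proxy has to parse:

```go
package main

import (
	"fmt"
	"regexp"
)

// clientPorts extracts the negotiated RTP/RTCP ports from an RTSP
// Transport header. kube-proxy forwards these bytes opaquely, so it
// never learns which UDP ports to handle.
var portRe = regexp.MustCompile(`client_port=(\d+)-(\d+)`)

func clientPorts(transport string) (rtp, rtcp string, ok bool) {
	m := portRe.FindStringSubmatch(transport)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	h := "RTP/AVP;unicast;client_port=8000-8001"
	rtp, rtcp, _ := clientPorts(h)
	fmt.Println("RTP on", rtp, "RTCP on", rtcp) // RTP on 8000 RTCP on 8001
}
```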
So that was the sort of realization that we had to do something. One approach I've seen people try is just to use host networking: get away from kube-proxy by running everything in the host namespace, and that works absolutely fantastically. So why is that a problem? It only works if you're running one pod on each node, and I've seen people do that, literally on virtual machines, where they size the virtual machines down and have one media pod on each node. But if you want to run on bare metal (like the speaker just now was talking about provisioning Kubernetes on bare metal), you probably don't want one pod per node. So that was where we came out with this and said: okay, let's focus on real-time media, but let's have something that can also extend beyond that.
So what are we trying to achieve? Without reading through the slide, what I would say at a kind of 10,000-foot view is: we're trying to do exactly what Istio and Envoy do for web applications, but do that for these real-time apps. So all of the observability and security that you'd expect: you should be able to encrypt your traffic without the app sending encrypted traffic, and you should be able to watch what's going on. But we want low latency, and we also want to have a very light footprint, particularly in some of the edge deployments where you're deploying on pretty constrained hardware. You really don't want to be adding that, whatever it is, 40 megabytes per Envoy proxy.
So what are those use cases? As I say, we've focused down on anything RTP-based for now. It could be contribution video, so stuff coming from cameras into studios, being mixed and sent out. It could be video distribution, so actually pushing it out, whether to antennas or to caches, possibly even to the end user, which we'll come on to. But where we actually started was this whole retail and industrial edge kind of use case, and I'll come on to what that's about in a moment. And then there's real-time collab. I work for Cisco, it's no secret, we do WebEx, but I mean this generically: it could be WebRTC, it could be WebEx, it could be Zoom, anything that's real time. And I think again, if you look at those platforms, they tend to use RTP, often with proprietary control planes, but obviously WebRTC being more standard.

What we have kind of moved away from is some of these other non-RTP protocols or applications. So gaming looks a lot like media, but it doesn't use RTP; today people typically roll their own protocols on top of UDP. Finance is much the same: people tend to roll their own protocols.
Mobile backhaul, again, is tunnelled over UDP, but it's not RTP. But we have seen some scope for RTP. I've seen, for example, a couple of internet drafts from colleagues at Cisco, which have nothing at all to do with me, but use RTP. One was carrying high-order time-division-multiplexing traffic, so literally gigabits of low-latency traffic over an IP network, and that has an RTP layer in it. Somebody else on the collab side is looking at doing gaming, particularly things like the metaverse, whatever that is, and his draft goes into things like how you model a hand, you know, the knuckles and all that stuff, but that again streams over RTP.

Now, I'm not a video person, I'm a network person, so this is my sort of view of how video probably works; I'm guessing. Having got your raw feed, you then need to encode it into your different formats for broadcast, so it could be 4K, 1080, etc.
There might be different protection mechanisms too, particularly if you're doing stuff over the top, as you feed it out to CDN caches and antennas, etc. And for that there might be some nice things we could do with things like diverse feeds, to make sure that if we lose one network path we don't lose that feed, because clearly this is high-value stuff to the consumer. And then there's the end user.

So in terms of where we could apply this technology: I think that very front end is tricky, because there's a lot of dedicated hardware. I mean, look here: there's a physical camera, there's a physical mixing desk, so running those things in a Kubernetes cluster might be a little bit challenging. But encoding, if you think about it, seems like very much a compute problem that we can solve. Feeding stuff out again should be very doable.
The final drop to clients: if you want to use something like RTP, the challenge is how do we do that? We've done demos, for example, of RTP over QUIC, using a draft that somebody at the BBC came up with. The challenge there is going to be: well, I could probably get that to a browser in future, but can I get it to my smart TV, where the software upgrade cycles are very slow?

So we have these encoded streams, which we can now pick up to feed out to caches, etc., and this is where we might want to do things like live-live over diverse paths, add error correction, those sorts of things. As I say, we did a demo of this RTP-over-QUIC client and got it working. The thing is, at the moment this whole space is very much up in the air. That draft somebody at the BBC had written was using QUIC datagrams.
I don't know how much you know about QUIC: HTTP/3 runs on top of QUIC, and normally you use streams, but there is an option to have datagrams. But they are literally datagrams: there's no flow ID or stream ID or anything. So there are therefore drafts on how to add that stuff back in, so that you can have more than one video stream. And, as I say, getting it supported could be tricky.
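To illustrate the flow-ID problem those drafts address, here is a sketch of the idea of prepending a flow identifier to each datagram payload so several media streams can share one QUIC connection. This is illustrative Go only; note that Go's varint encoding is not QUIC's wire varint format.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// QUIC DATAGRAM frames carry opaque bytes with no stream or flow ID, so
// drafts in this space prepend their own flow identifier in order to
// demultiplex several media streams over one connection.

func encodeDatagram(flowID uint64, payload []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64+len(payload))
	n := binary.PutUvarint(buf, flowID)
	return append(buf[:n], payload...)
}

func decodeDatagram(dgram []byte) (flowID uint64, payload []byte, err error) {
	id, n := binary.Uvarint(dgram)
	if n <= 0 {
		return 0, nil, fmt.Errorf("bad flow id")
	}
	return id, dgram[n:], nil
}

func main() {
	d := encodeDatagram(3, []byte("rtp packet bytes"))
	id, p, _ := decodeDatagram(d)
	fmt.Println(id, string(p)) // 3 rtp packet bytes
}
```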
Ultimately, if we could get to that sort of model, I guess the goal is that I don't have to worry about texting my mate to say a goal's been scored, because he's going to be seeing it at roughly the same time as me. And I guess the thing is, we know this is doable, because things like Zoom and WebEx are a thing and they're real time, right? So it must be doable. The question, I guess, is: what's the cost of doing it? If I were a gambling man, which I'm not, and I were doing in-play betting, I probably wouldn't want a feed that was running 30 seconds late, because that would suck.

But we'd have to modify the control plane, because we're not handing out UDP ports any more, we're handing out these flow IDs we've had to invent. We want to potentially put things like congestion control in, so we'll come on to that. But it's an interesting space: there's a BoF in the IETF called Media over QUIC.
The discussion there is: should we do things like RTP over QUIC for media on QUIC, or should we effectively reinvent something that's more suited to QUIC? Because if you think about it, QUIC already gives you things like encryption, so why not leverage the things that QUIC does and add in the things that we need for live media? So there's a proposal from a colleague at Cisco called QUICR (and I've probably already spelled it wrong).

With that sort of approach you don't get head-of-line blocking across multiple video frames: you might get one video frame delayed, but the next video frame would still arrive, so you can just skip that one and move on. So there's a lot of stuff going on there where we're not yet sure how it's going to crystallize, and I guess I'm kind of thinking: well, let's plough on with the current RTP focus and wait and see what happens there.

Right, so video monitoring. This is where I started.
There are sites with just a few cameras, the classic sort of thing being a Starbucks coffee shop: they have thousands of shops, with one or two cameras each. But at the other extreme there are things like airports and factories. I was in Las Vegas for the Cisco conference the week before last, and I hadn't realized just how many cameras there are in casinos. I took a look at the ceiling of the casino, and it was covered in cameras: everywhere you go, you are watched. And of course the thing is, cameras are now cheap, but humans haven't got any cheaper. So now we've got perhaps 100,000 cameras in some venues, insane numbers like that, and we can't employ 10,000 people to watch 10 feeds each, because that's way too expensive.

So maybe we can run some kind of machine learning to spot something happening that we think shouldn't be happening, whether it's a fight, whether it's somebody stealing money in a casino, whatever, and raise an alert so a human can take a look. Today, the way these things get deployed is that each camera typically gets its own IP multicast group, because you have multiple applications or humans watching each camera. And if there are any networking people in here: you don't want a hundred thousand IP multicast groups in your network.
That's probably not a great idea. So what we can do is put some proxies near the cameras, and put proxies near the viewers, or the apps that are viewing, so traffic can flow through them. In this example you might have local analytics looking to see what's happening; the moment it sees something it doesn't like, it could raise an alert via, you know, MQTT or Kafka or something, and somebody at the central site gets alerted and starts watching. And of course the proxies inherently give you that fan-out, so you can have multiple subscribers watching each feed. Each time you join a stream via the proxy, the proxy is going to ask: do I already have that stream? And if I do, I'm just going to replicate and forward, rather than re-joining towards the source.
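A minimal Go sketch of that join-or-replicate logic (the names are illustrative, not the actual MSM implementation):

```go
package main

import "fmt"

// streamFanout captures the proxy's join logic: the first subscriber
// triggers a join towards the source; later subscribers are simply
// added to the replication list for a stream the proxy already holds.
type streamFanout struct {
	subs map[string][]chan []byte // stream URL -> subscriber channels
}

func (f *streamFanout) Subscribe(url string) chan []byte {
	ch := make(chan []byte, 64)
	if _, held := f.subs[url]; !held {
		fmt.Println("joining towards source:", url) // placeholder for a real upstream join
	}
	f.subs[url] = append(f.subs[url], ch)
	return ch
}

// OnPacket replicates an incoming packet to every subscriber of the stream.
func (f *streamFanout) OnPacket(url string, pkt []byte) {
	for _, ch := range f.subs[url] {
		select {
		case ch <- pkt:
		default: // drop rather than block on a slow subscriber
		}
	}
}

func main() {
	f := &streamFanout{subs: map[string][]chan []byte{}}
	a := f.Subscribe("rtsp://camera-1/stream")
	_ = f.Subscribe("rtsp://camera-1/stream") // second join: no new upstream join
	f.OnPacket("rtsp://camera-1/stream", []byte("frame"))
	fmt.Println(string(<-a))
}
```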
So how are we building it? At the Kubernetes cluster level we have a control plane, and we can run a control plane instance per protocol, RTSP etc., though we'll come on to how we make as much of that as possible common. The stub injector is just there to insert this little stub, both into our gateway pods and into the application pods; the stub itself is just there to intercept the control plane. As DaemonSets we have CNI plugins (it's a chained CNI), and on each node we'll have that data plane proxy. And then there's the stub, this little piece of code that's going to run inside each application pod, which is an order of magnitude smaller than something like an Envoy proxy. Currently it builds at about two and a half megabytes, I think, versus something like 40 for Envoy. And we'll have iptables rules, potentially, to intercept traffic, or eBPF, which is the other kind of interesting thing at the moment that everyone seems to be talking about in the Kubernetes networking space.
So, the stub injector. When I started writing this, I had an original demo using a sort of unitary sidecar proxy. The problem was I didn't know enough about Kubernetes, and I was literally writing YAML files that would put the stub into each pod, and then I was having to run the pods privileged, that sort of thing. What the stub injector does is just add all of that in for me; I obviously had to get into the developer side of Kubernetes to do this.

The CNI plugin is then responsible for these iptables rules. It just inserts a rule so that anything on the control-plane port (typically, for RTSP, that will be port 554) gets redirected into the stub. And the way both of these work is that they look for a label that gets attached to a deployment: if we have a pod that matches that label, then we'll put those iptables rules in and we'll put the stub in. As for the control plane, it's one instance per protocol per cluster.
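In sketch form, the decision both the injector and the CNI plugin make looks something like this. The label key below is hypothetical (check the MSM repos for the real one), and the iptables rule in the comment is just an example of the kind of redirect meant:

```go
package main

import "fmt"

// The injector and CNI plugin key off a pod label. This label name is
// hypothetical, not the project's actual key.
const msmLabel = "msm.cisco.com/enabled" // hypothetical

type pod struct {
	Name   string
	Labels map[string]string
}

// needsMSM reports whether a pod should get the stub injected and the
// iptables (or eBPF) redirect for its control-plane port installed.
func needsMSM(p pod) bool {
	return p.Labels[msmLabel] == "true"
}

func main() {
	p := pod{Name: "camera", Labels: map[string]string{msmLabel: "true"}}
	if needsMSM(p) {
		// e.g. redirect RTSP into the stub:
		//   iptables -t nat -A PREROUTING -p tcp --dport 554 -j REDIRECT --to-ports <stub port>
		fmt.Println("inject stub and redirect port 554 for", p.Name)
	}
}
```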
Again, it's open source and it's fairly early; we haven't yet figured out: do we put that inside the control plane? Do we have a separate service? Do we use an xDS API like Envoy? So opinions would be really good here. I guess the benefit of a separate API is that we don't have to tie the actual control planes to Kubernetes. The control plane then uses gRPC to talk to the stub and to the data plane proxy. We've written it in Go, and the great thing there is the whole bunch of existing libraries we can use. So the RTSP implementation we have at the moment uses, I think, aler9's library (that's the guy's handle on GitHub), so we just picked that library up; for WebRTC we'd probably use Pion, etc.

That seems like a good way to go, but what we're really trying to do is have the code northbound and southbound be as common as possible, so that someone coming in and writing a protocol just focuses on that protocol bit and doesn't have to worry too much about the rest. And then hopefully, if they're picking up a library as well, the amount that has to be written is quite limited.
And really what this comes down to is the whole question of participation. So far I've been trying to build this project, and it's easy to say "well, I'm doing this thing and it's going to be open source", but then it's like: well, how do we get anyone to participate? I think the first thing is to make it easier, so if we can make it easy to plug these protocols in, I think that will help. (I'm genuinely not infectious, I was fine as of Sunday, but the cough is still hanging on.)

So the stub: really all it does is terminate a TCP connection from the app, and then it uses gRPC to talk to the control plane. Typically, of course, for pods within the cluster you're only going to have one app and one TCP connection, but where we're gatewaying we could have a whole bunch of different remote endpoints.
Hence why we use one gRPC session and mux all of those together, which then helps redundancy: if a control plane fails, we can just move that over to a different control plane instance. The TCP session doesn't change, and the client is blissfully unaware that anything happened.
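The muxing idea, reduced to a sketch: each chunk of each intercepted TCP connection travels over the shared session tagged with a connection ID. This is illustrative Go, using a bytes.Buffer in place of the real gRPC stream:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// frame is one chunk of one intercepted TCP connection, tagged with a
// connection ID so many connections can share a single stream.
type frame struct {
	ConnID  uint32
	Payload []byte
}

func writeFrame(w *bytes.Buffer, f frame) {
	binary.Write(w, binary.BigEndian, f.ConnID)
	binary.Write(w, binary.BigEndian, uint32(len(f.Payload)))
	w.Write(f.Payload)
}

func readFrame(r *bytes.Buffer) (frame, error) {
	var id, n uint32
	if err := binary.Read(r, binary.BigEndian, &id); err != nil {
		return frame{}, err
	}
	binary.Read(r, binary.BigEndian, &n)
	p := make([]byte, n)
	r.Read(p)
	return frame{ConnID: id, Payload: p}, nil
}

func main() {
	var stream bytes.Buffer
	writeFrame(&stream, frame{1, []byte("DESCRIBE rtsp://cam/1")})
	writeFrame(&stream, frame{2, []byte("DESCRIBE rtsp://cam/2")})
	for {
		f, err := readFrame(&stream)
		if err != nil {
			break
		}
		fmt.Println(f.ConnID, string(f.Payload))
	}
}
```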
There are some cases where we intercept the data plane in the stub too. With RTSP, for example (and you'll see this in the demo in a moment), there's actually a case where the data plane traffic gets multiplexed over that TCP control channel if UDP won't work. So there are cases where we can do that, and the nice thing is we can do that towards the app but then run UDP back out to the network. And there may be cases, you know, for monitoring etc. as well.
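That TCP fallback is standard RTSP interleaving (RFC 2326, section 10.12): each RTP packet is framed on the control connection as a '$' byte, a channel ID, and a 16-bit length. A small Go sketch:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// interleave frames an RTP packet for the RTSP TCP control connection:
// '$', channel byte, 16-bit big-endian length, then the packet bytes.
func interleave(channel byte, rtp []byte) []byte {
	var b bytes.Buffer
	b.WriteByte('$')
	b.WriteByte(channel)
	binary.Write(&b, binary.BigEndian, uint16(len(rtp)))
	b.Write(rtp)
	return b.Bytes()
}

// deinterleave recovers the channel and RTP payload from one frame.
func deinterleave(frame []byte) (channel byte, rtp []byte, err error) {
	if len(frame) < 4 || frame[0] != '$' {
		return 0, nil, fmt.Errorf("not an interleaved frame")
	}
	n := binary.BigEndian.Uint16(frame[2:4])
	return frame[1], frame[4 : 4+n], nil
}

func main() {
	f := interleave(0, []byte("rtp bytes"))
	ch, p, _ := deinterleave(f)
	fmt.Println(ch, string(p))
}
```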
We may want some of the security functionality to be in the stub as well. And as I say, the footprint is pretty small: I think I'm at about five or six hundred lines of Rust at the moment. The reason for Rust was really because there might be these cases where you insert it into the data plane, and (this is gut feel, I don't have any empirical data for it) I didn't want anything where there might be garbage collection, so I went with something like Rust. Also, it's fun to learn, right? I mean, if your company lets you write open source, you might as well write stuff that's going to be interesting.

The RTSP proxy we're just starting on at the moment. We have a sidecar implementation, but as a standalone we're just starting on it. The idea is one per node, so it'll handle both north-south and east-west traffic. One per node also feels like the right point to do replication. And fundamentally, because it's terminating the RTP layer, it will give you things like unicast to multicast, v4 to v6, etc. for free.
It will also abstract whether you have UDP, TCP or QUIC underneath, and it will proxy those lower layers. And one of the key things in having this little stub plus a separate data plane proxy is that the goal is very much to minimize the attack surface. (Just remembered what stops you coughing, and I'm hoping this is my one, right.)

So, in terms of the implementation of the proxy: again, back to this thing of how we drive an ecosystem, how we drive a community. I think if we have a plug-in model, that's the easiest way for people to contribute, because then they can write plugins, and the plugins should have a very simple interface: you literally receive a packet and send out a packet, that kind of interface. And again, this data plane should be able to work with multiple control planes simultaneously.
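Something like the following Go sketch is the kind of interface I mean; the names are illustrative, not the project's actual API:

```go
package main

import "fmt"

// Filter is a plugin that sees a packet and returns zero or more
// packets to pass down the chain. It never touches sockets or the
// surrounding infrastructure itself.
type Filter interface {
	OnPacket(pkt []byte) [][]byte
}

// duplicate is the naive-FEC filter from the demo: send everything twice.
type duplicate struct{}

func (duplicate) OnPacket(pkt []byte) [][]byte { return [][]byte{pkt, pkt} }

// runChain pushes a packet through an ordered filter chain.
func runChain(chain []Filter, pkt []byte) [][]byte {
	in := [][]byte{pkt}
	for _, f := range chain {
		var out [][]byte
		for _, p := range in {
			out = append(out, f.OnPacket(p)...)
		}
		in = out
	}
	return in
}

func main() {
	out := runChain([]Filter{duplicate{}}, []byte("rtp"))
	fmt.Println(len(out), "packets out") // 2 packets out
}
```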
Now, we're probably going to knock together a first implementation of this just in Go, without the filter chain, but my goal is to write this in Rust: have that ingress processing in Rust, and then the filter chain would be Wasm plug-ins, because that seems to be the way a lot of things are going at the moment. And we shouldn't need WASI, as it's called, because if you're just receiving a packet and sending a packet out in the chain, you're not actually interacting with the infrastructure, with the network, per se. But we really want this to be completely flexible in how you program it, because it then enables people to write their own plugins that do the specific things they might want to do. So, for example, for security reasons you might want to validate the traffic that's coming in from outside.
So if you're running something like WebRTC and you've got your infrastructure, but you don't 100% trust your clients, then you'd check the SSRCs on the traffic coming in, that sort of thing. And you'd be able to recompose that filter chain into whatever order you want. Though, to some extent, stream replication feels like a bit of a special case, partly because it's going to have to interact with the control plane.
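As a sketch of that kind of validation plugin (illustrative Go, reusing the receive-packets/return-packets shape from above): drop anything whose SSRC the control plane hasn't told us to expect.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// ssrcFilter drops RTP packets whose SSRC isn't one the control plane
// told us to expect, e.g. traffic from untrusted WebRTC clients.
type ssrcFilter struct {
	allowed map[uint32]bool
}

func (f ssrcFilter) OnPacket(pkt []byte) [][]byte {
	if len(pkt) < 12 {
		return nil // too short to be RTP; drop
	}
	ssrc := binary.BigEndian.Uint32(pkt[8:12]) // SSRC lives in bytes 8-11
	if !f.allowed[ssrc] {
		return nil // unknown sender; drop
	}
	return [][]byte{pkt}
}

func main() {
	f := ssrcFilter{allowed: map[uint32]bool{0xdeadbeef: true}}
	pkt := make([]byte, 12)
	binary.BigEndian.PutUint32(pkt[8:12], 0xdeadbeef)
	fmt.Println(len(f.OnPacket(pkt))) // 1: passed
	binary.BigEndian.PutUint32(pkt[8:12], 0x01020304)
	fmt.Println(len(f.OnPacket(pkt))) // 0: dropped
}
```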
So, as I said, the demo. It's using this hacked-together sort of sidecar proxy, and we have a camera pod, which has a fake camera feed; there's an RTSP server instance; and we have our proxy in there. I've also put an Envoy proxy in there. I couldn't use Istio because of the same-namespace thing; it's a bit tricky to do Istio and Media Streaming Mesh at once, but this should be the same sort of path that you'd get with Istio. So then at the ingress you've got an Envoy proxy and the kube-proxy approach. We then have two instances of our own proxy. This one, as we say, is "with FEC": it's not really forward error correction, I just send every packet twice, which has the same effect. And then we have a little API that I can use to inject fake packet loss, and then this is the thing that does the URL lookup. So literally what you'll see is this stream.
You can actually access this if you want: it's out on the Internet. It's a very lightweight VM, so it's literally just a fake camera feed. This one is going through Envoy, and it's taking forever; the reason it's taking forever is that it has to fail with UDP first and then try TCP, and we'd see exactly the same if we went via kube-proxy. Now, one of the challenges I have: I actually did have quite a slick demo with a web user interface.

So if we just show the regular one: this is running over UDP, and everything's good. Since I've got this here, let's dial in some packet loss; this bit should work, at least. Ten percent. This demo was actually quite cute when I had a fatter VM, because it would show me putting on a mask and it would detect, with red boxes and green boxes, whether I've got a mask on. But, I don't know, if anyone's played with machine learning models (this is going to turn into a rant): they call it microservices, and you end up with a pod that's like 100 gigabytes. You're like, what's "micro" about this service? It's nuts, and it burns, well, GPU like crazy. So yeah, here it is; that seems pretty stuttery at 10 percent.
But again, that's absolutely fine, because 10% packet loss when you double the packets is equivalent to 1% packet loss. Now, this is assuming that packet loss is random. I mean, hands up here, who's ever seen random packet loss, right? So it's probably a bit of a fake, it's a very fake demo, but it shows the kind of ideas.
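The arithmetic, for what it's worth: with independent loss probability p, a duplicated packet is lost only when both copies are lost, so the effective loss is p squared, i.e. 0.10 x 0.10 = 0.01. A quick Go simulation confirms it, under the same unrealistic independence assumption:

```go
package main

import (
	"fmt"
	"math/rand"
)

// With independent random loss p, a packet sent twice is lost only if
// both copies are lost, so effective loss is p*p: 10% becomes 1%.
// Real loss is bursty, which is why this is only a demo-grade "FEC".
func main() {
	const p = 0.10
	const n = 1_000_000
	lost := 0
	for i := 0; i < n; i++ {
		if rand.Float64() < p && rand.Float64() < p {
			lost++ // both copies dropped
		}
	}
	fmt.Printf("effective loss: %.2f%%\n", 100*float64(lost)/n) // ~1.00%
}
```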
Which is interesting; all demos should fail, right? So I think that was about it; I'll wrap up quickly.

So really the strap line for this, I guess, would be: the goal is to make those real-time media apps first-class citizens in a cloud native infrastructure. It's very much work in progress, it's open source, there's a website, and you can see it on GitHub; some of the repos are public, some aren't public yet. If you want to contribute, just ping me and send me your GitHub ID and I'll give you access, because really we're just looking for people to help out and work on this. As I say, the control plane is largely written in Go, and the data plane will probably mostly be Rust.