From YouTube: Centaurus Monthly TSC Meeting 5/25/2021
A: We are going to review the networking SIG. Rene is leading the networking SIG work, so today he is going to present the plan, the high-level architecture, and the next important work items, just to give us an overview update, and also to see if there are any suggestions, feedback, or guidance for the networking work, for people that are not familiar with networking. It's mostly about the Mizar project within Centaurus.
A: If not, we can just do these two items, because I think the second item will most likely take a much shorter time. So if you guys don't mind, I suggest that we switch the order: we first do Deepak's item, and then we can have all the remaining time for Rene, so we have more time for any detailed discussions.
C: Yeah, let me just share my screen. I just want to show it. Yeah, okay, one second.
C: Yeah, I would like to propose adding Professor Hakim Weatherspoon to our advisory committee. Professor Hakim is a professor at Cornell, and we've been doing collaboration work with him on a couple of projects, actually. He's also the CEO of a company called Exotanium. These guys do cloud optimization, and they have a new next-generation containerization technology as well.
C: He's very visible in the cloud community and the research community as well, so it would be great to have him on board as part of our TSC, as an advisor. Obviously he's pretty busy, so he's not going to be able to make all the meetings, but whenever he can, he can provide us his thoughts and maybe guidance as part of the TSC.
C: Yeah, I can flash his website as well, Exotanium, just quickly. It'll be great to have him on board. One second here.
C: Okay, yeah, there you go, that's their company. This is all professors, and the students, the PhDs and postdocs, started a startup actually spun out of Cornell itself. Anyway, that's it, so yeah, we can do the vote.

Do we still have a collaboration with them? I mean, now?
C: Well, yeah, we can do that. We can bring it up with them. Maybe in one of the meetings we can ask the advisors to come and share their thoughts, or at least come and introduce themselves. Yeah, we can do that.
C: Because, you know, the doctor is obviously on the call, but Chris and Professor Hakim, if he gets voted in, then we can ask them to come and at least initially introduce themselves. Yeah.
E: Deepak, one more suggestion. Normally I would suggest that we have a plan for bringing in this advisory board: in which area, and what exactly we are trying to achieve with it. I mean, I can see Professor Hakim's credentials are extremely good, but we would like to see how this correlates to our project area exactly.
C: I can talk about that, actually. Essentially, these guys are working with a lot of customers, with the whole cloud optimization work they're doing and the whole containerization work. So I think it'll be good; there's maybe a potential opportunity to bring in their customer base and see if Centaurus makes sense for them. So, from that perspective.
B: I think also, periodically, maybe we can host an advisory board roundtable, use one of our meetings down the road, and then they can also talk about what they think about our project and give us advice. I mean, so far we have Chris Aniszczyk from the Linux Foundation, Dr. Sean, and this professor. And also I would recommend we document that on our website.
A: Yeah, that's a good idea. I think if we have an advisory board, we should think about how they benefit us, and how we have some regular mechanism to work together; we can detail that later. Yeah.
E: Do you want to actually put together something, Sean, on the idea of how we would like the advisory board to help? It should not be a namesake thing. As a TSC member I would really want some level of clarity on the whole thing, because, I mean, I have 20 other people I could bring to this meeting who are incredibly valuable to the community.
C: He did his PhD at Berkeley, but he is a professor at Cornell. But I think that's a good point, so we should articulate what the advisory board is, and then we should do a kickoff meeting as well. Maybe one TSC meeting session just dedicated to the advisors; let them talk. Yep.
D: Just to add some color to that, if I may: with Professor Hakim and the Exotanium company, in the collaboration that we worked on with them in the past, one of the things we found intriguing is their technology called X-Containers, which essentially tries to minimize the kernel and context-switching overhead, and it speeds up your secure containers. We feel that is pretty helpful for the Arktos side. Yeah.
C
This
next
generation
of
containerization
uni
kernel
based
library,
us-based
container
much
more
lightweight
and
the
other
thing
I
wanted
to
highlight
that
so
one
of
the
things
they
have
done
and
one
of
the
collaboration
we
did
was
a
thing
called.
A
technology
called
zen
blanket.
So
essentially
they
have
one
they're
they're
doing
pretty
good.
Actually,
with
the
companies
such
as
autodesk
and
the
bunch,
I
think
they're
working
with
adobe
as
well
to
optimize
the
spark
market.
Actually
so
so
they
have
a
technology
patented
technology.
D: That's wonderful, yeah. We recently did the pilot project with them on the Xen-Blanket technology, and they did their pilot with Adobe, I believe, and recently they got funded by…
D: We could definitely use their input and guidance on how to shape Centaurus from the compute side of things as we go forward.
E: Very cool. Any other team members who can join from his team, Deepak, to continue the support from Exotanium?
C: Yeah, the professors are all connected. We can, but I know we don't want to have too many people on the advisory board. So that's…
C: Just, you know, if you think about it, Nikita is there on the call as well. One of the opportunities, the project which I proposed last time, replacing etcd with Ignite: the technology which we worked on is the technology Exotanium does business with. I think that's a possible synergy, actually. You see, the whole cloud mobility thing is actually a pretty powerful technology.
C: How do you take your workload running in one cloud, let's say, for example, in Centaurus in your local data center, and all of a sudden cloud bursting happens and you end up in AWS, or you do the spot market and then you end up in Azure, seamlessly, without even knowing about it? There's a whole machine-learning bit, so that's a possible synergy as well; that could be another possible project. That way, just like GridGain, they're going to work with us to do that contribution.
A: Okay, if there are no more questions. Mani, I think Mani is trying to fix something. Yeah, okay, she solved the problem now. Let's have a vote.
G: I'm sorry, can you see whether you can vote? Okay, just because I got the pop-up, I have two choices here, or three.
C: That suggestion is a good one as well. We should have some kind of a blog, some kind of a paragraph on our website as part of Centaurus: what exactly the role is going to be, what exactly it entails, and what the areas are we are interested in as far as bringing in more advisors. Maybe one paragraph; we can do that.
A: Okay, great. I think the time now is yours.
A: Just one minute before we start, in case some TSC members are not familiar with this: we have several special interest groups under Centaurus. They are the scalability SIG, the networking SIG, and the AI SIG, among others.
A: This review is the third one in this series, and the next one should be the AI SIG; we're just rotating through them one by one at each TSC meeting. It's just a little background in case someone doesn't remember this.
D: Okay, so shall I get started?
D: Yes, please. All right. So, Mizar. I'm sure many of you are familiar with it already, and if a lot of this is a repeat, please excuse that. Also, I joined the Mizar team a couple of months ago, so I may have very large knowledge gaps in my own understanding of Mizar. I have some idea from the past, when I was working with the team before and used to talk with them about it, but I will answer any questions to the best of my ability.
D: I'll give a summary of the collaboration work that we are doing, and we'll take a look at the roadmap that we have planned for the rest of 2021; and there are some resources so that you can easily find information about Mizar and how to get in touch with us. So, the overview: at the core of Mizar, we rely on the XDP technology in the Linux kernel.
D: So, eBPF is considered one of the most groundbreaking developments in the Linux kernel in the past decade, and it's been used by various companies for different reasons. For example, if there are attack vectors coming in, Facebook, I believe, has done tweaks to the kernel: you can patch the kernel without having to rebuild it, do it dynamically, and then change how the packets are handled to prevent attacks. For our purpose, we are essentially looking at XDP as a data-plane
D
Technology
to
perform
the
packet
overlay
function,
functionality
that
we
use
in
sdn
software
defined
networks.
So
at
the
core,
it
is
just
to
give
a
a
summary
comparison
of
misar
versus
psyllium,
which
is
another
implementation
of
kubernetes
networking
and
sdn
networking
in
general.
They
also
use
ebpf.
The
difference
between
us
and
them
is
that
is
the
use
of
is
a
greater
reliance
on
xtp.
So
what
happens?
D: So in this host, you have the pod that's running in user space, and typically we use veth pairs to connect the pod to the host networking stack. In this case, Cilium would go through TC (traffic control) hooks. So what happens is: the socket buffer, the sk_buff, is allocated in the kernel, the processing is done on that, and then it gets sent out through the host NIC using the overlay technology of their choice.
D
Miser
we
use
geneve
for
its
extensibility.
Geneve
has
options
that
you
can
add.
So
this
makes
your
oled
technology
very
extensible.
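The Geneve extensibility mentioned here comes from its TLV options. A minimal Python sketch of packing a Geneve header with one option, following the RFC 8926 layout (the option class, type, and the 32-bit tenant label carried in it are hypothetical, not Mizar's actual encoding):

```python
import struct

def geneve_option(opt_class: int, opt_type: int, data: bytes) -> bytes:
    # One Geneve TLV option: class (16 bits), type (8 bits),
    # then length in 4-byte words (low 5 bits of the next byte).
    assert len(data) % 4 == 0, "option data must be 4-byte aligned"
    return struct.pack("!HBB", opt_class, opt_type, len(data) // 4) + data

def geneve_header(vni: int, options: bytes = b"", proto: int = 0x6558) -> bytes:
    # Base header: ver(2)+opt_len(6) | flags(8) | protocol(16) | VNI(24)+rsvd(8).
    # proto 0x6558 = Transparent Ethernet Bridging (inner Ethernet frame).
    assert len(options) % 4 == 0
    first = len(options) // 4              # version 0, option length in words
    return (struct.pack("!BBH", first, 0, proto)
            + struct.pack("!I", (vni & 0xFFFFFF) << 8)
            + options)

# A hypothetical option carrying a 32-bit tenant label in option class 0x0111.
label_opt = geneve_option(0x0111, 0x01, struct.pack("!I", 42))
pkt = geneve_header(vni=5001, options=label_opt)
```

Because options ride along with every encapsulated packet, metadata like a tenant or policy identifier can travel in-band, which is what makes the overlay extensible.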
I myself have been using it in a project I've been working on for the past three months. So in this case, what happens? A packet that's being sent out is intercepted by the XDP hook here; the socket buffer is not even allocated.
D
And
that's
one
of
the
reasons
why,
where
the
speed
comes
from,
it's
not
kernel
bypass,
it
could
use
the
canal
kernel
features
as
it
needs,
but
in
this
case
we
are
just
to
use
the
redirect
redirect
action
to
send
it
directly
to
the
transmit
hook,
transmit
api
of
the
host
nic
host
driver
and
on
the
incoming
path.
There
is
another
xtp
hook
here
which
does
some
processing.
If
there
is
some
network
policies
that
say,
okay,
this
packet
can
come
through
or
this
packet
needs
to
be
dropped.
D: It's done here, so we don't even hit the sk_buff or the stack, and once it's accepted, it's just redirected to the host. So this gives us a lot more speed compared to the Cilium implementation.
D: A very brief summary. This is not a true apples-to-apples comparison, but it's a summary of why we took this approach and what we feel are the advantages of Mizar.
A: Probably, if you can add that part, maybe it will make it more complete. I mean, for me, we build most of our features on XDP, but Cilium, they started using XDP, I think, for the load balancing of external services.
A: All the other parts are still on the socket-level eBPF. Yeah, I mean, if we added that part, it would probably make this a more complete comparison there.
D
Right
yeah,
I
just
yeah,
okay,
that
it's
good
feedback.
I
think
I'll.
D: …make that change, to see it without taking focus away from the key differentiation. Yeah, you're correct: they use XDP here for ingress filtering, they use TC for policies and bandwidth control, and they use socket filtering, socket hooks, as well. Yeah.
D: Thank you, that's good feedback. Yeah, I'll improve this slide in the future and just mention those pieces there. Yeah.
D: Yeah, there is more going on; it's more evolved, and you could say ours has evolved in a different way. We choose to do things in XDP, chained XDP programs, and pretty much rely on XDP for everything. I believe there are some cost savings in that. We do want to get a true head-to-head benchmark, Cilium versus Mizar; it's one of the things we're looking at, and I'll talk about it in the collaboration section of our presentation today.
D
Okay,
so
I'll
move
on
we,
the
next.
D
Well,
okay,
so
we,
this
is
looking
at
the
misa
data
plane
a
little
bit
more
closely
in
the
previous
slide.
It
was
a
brief
overview.
What
really
is
happening
when
you
talk
about
vsphere,
you
can
think
of
it
as
two
pipes,
a
pipe
that
goes
this
way
in
this
direction.
Any
packet
that's
sent
out.
That's
that
you
shove
into
the
tx
here
pops
out
at
the
rx
here
any
packet
that
you,
shell
into
the
dx
here
pops
out.
That's
essentially
your
wii.
D
It's
pair,
the
miser
implement
our
implementation
of
mesa,
uses
a
ebpf
program
called
transit,
transit
agent
or
the
xdp
code.
That
runs
at
the
rx
of
this
sweet
pair
in
the
host
side
of
the
name
host
side
of
the
video
in
the
host
namespace,
and
this
is
where
we
use.
We
look
at
the
packet,
we
determine
where
it
needs
to
go
and
we
determine.
Let's
say
this
is
namespace
a
this
is
part
a,
and
this
is
part
b
if
this
was
a
local
part.
If
it's
on
the
same
host,
it's
easy.
D: We call the transit program, which sends it over to the other pod's RX. If it is a remote pod, then we determine which host has the pod, which hardware or virtual machine owns the pod, and we send it: we encapsulate it in a Geneve packet, and then there is an outer IP layer which carries the source address of this host NIC and the destination address of the remote host.
D: So we send this out to the TX, and the packet travels to the remote host. Let's say it comes into the ingress of the remote host; then the transit XDP here, another eBPF program that we have, determines which pod it is for, decapsulates it, and then redirects it to the TX of the interface that belongs to the destination pod, and this TX sends the packet so it pops up in the namespace. Or we could talk about it going from here in the other direction. So this is an overview, looking in a little bit more detail at where we make the changes to get Mizar working. In summary, at the core of Mizar we have the transit agent program and the transit XDP program.
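The forwarding decision described here, local pod versus remote pod, can be sketched as a table lookup. A hedged Python sketch, with a plain dict standing in for the eBPF maps that the user-space daemon programs (the IPs and table layout are illustrative, not Mizar's real map format):

```python
# Hypothetical endpoint table standing in for the eBPF maps that the
# user-space daemon programs: pod IP -> owning host and locality.
ENDPOINTS = {
    "10.0.0.2": {"host": "192.168.1.10", "local": True},
    "10.0.0.3": {"host": "192.168.1.11", "local": False},
}

def transit_decide(dst_pod_ip, my_host_ip="192.168.1.10"):
    # Local pod: redirect straight to its veth, no encapsulation.
    # Remote pod: Geneve-encapsulate with outer IPs my host -> owning host.
    ep = ENDPOINTS.get(dst_pod_ip)
    if ep is None:
        return ("drop", None)                       # unknown endpoint
    if ep["local"]:
        return ("redirect_local", None)             # same host: veth to veth
    return ("encap_geneve", (my_host_ip, ep["host"]))  # outer IP pair
```

In the real data plane this decision runs inside the transit XDP/agent programs in the kernel; the point of the sketch is just that one map lookup picks between a local redirect and a Geneve encapsulation.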
D
There
is
more
functionality
to
this.
I
will
cover
this
in
the
next
slide.
When
we
look
at
the
scalability
aspect
of
the
mesa
architecture,
as
with
most
evpf
programs,
we
use
there
is
a
user
space,
there's
a
user
space
program,
a
daemon
that
runs
and
controls
the
maps
which
determine
how
these
these
programs
behave.
So
it
programs
the
maps
which
is
essentially
the
data
that
is
needed
for
the
program
to
interpret
determine
where
what
which
destination
ip.
D
It
needs
to
send
the
send
the
packet
to
so
that
that
essentially,
is
a
summary
of
a
little
bit.
A
closer
look
at
the
detail
of
how
our
http
programs,
how
we
use
xtp
in
misar.
A
The
left
side
of
the
redirect
you
just
mentioned
around
it's
also
a
frame
right.
C: There was a problem with the veth, the plug-in for the veth, and it doesn't work with the driver mode, right? So…
D
I
could
see
that
it's
all
driver.
D
This
predates
this
is
last
year's
diagram.
This
predates
what
I
have
learned
in
the
last
two
months.
I
so
I've
just
picked
it
up.
Sorry,
okay,
so
the
next
part
any
more
questions
on
this
one.
D
So,
in
the
previous
two
slides,
we
saw
what
what
the
data
path
is,
how
where
the
packet
manipulations
happen,
how
this
how
it
gets
done
now
how
what
really
makes
miser
you
know
multi.
We
have
these
terms
multi-tenancy
and
we
have
a
large
scale.
It's
built
for
large-scale,
fast
provisioning.
D
What
is
really
making
that
happen?
So
when,
today
we
it's
pretty
common
to
have
vpc
and
the
concept
of
networks
within
the
vp
subnets
in
the
vpc
aws
gce,
I
know
they
have
it.
I'm
quite
certain
azure
has
similar,
if
not
the
same,.
D: Sorry about that; when that other network starts having problems, it just has problems. Okay. So the VPC is essentially what we could call a playground for a single tenant. In Mizar, it translates specifically to a unique VNI, virtual network identifier, in the Geneve packet, and this is what gives us the isolation. Now we have the concept of bouncers and dividers. What are they? They're just new terms, but think about how packets are routed today, how everything is connected: we have switches, we have routers, and we have ASes, which BGP routing uses. And we know, it's quite evident, that this works quite well and scales quite well. If you buy a new computer and connect it to your network, the entire world doesn't need to know about it; just the local switches need to know that, okay, there is a new computer, and they figure out how to reach it. So, as a loose synonym, you could think of a bouncer like that: okay, you have a packet.
D
Let's
say
this
container
wants
to
talk
to
this
virtual
machine
and
they're
in
the
same
network.
They
the
container,
would
bounce
the
packet
off
this
and
then
send
it
to
the
vm
and
vice
versa.
D
Now
this
is
a
one
hop
talk
all
the
time
and
we,
of
course
do
have
a
mechanism
where
we
can.
If
this
container
is
communicating
with
this,
what
happens
here,
let's
say
a
new
container.
This
is
a
new
container.
It
pops
up.
The
only
things
that
you
need
to
do
to
get
this
container
connected
is
tell
this
container
or
the
host
that
is
hosting
this
container.
D
What
its
bouncer
ip
address
is.
So
that's
one
entry
to
the
xdp
table
on
the
host.
This
bouncer
is
running
on
a
different
host.
So
then
you
go
to
the
this
bouncer
and
tell
it
okay,
you
have
a
new
container
that
belongs
to
your
network,
and
this
is
its
ip
address
and
mac
address,
or
this
is
its
host
ip
address.
D
So
it
has
the
host
mac
address,
so
it's
gonna
any
packets
that
are
going
to
this
from
this
virtual
machine.
Even
before
it
knows
about
this,
it's
gonna
go
to
the
bounce,
so
the
bouncer
is
gonna,
say
yeah.
I
know
how
to
send
it.
It's
gonna
send
it
this
way,
and
then
it
also
sends
a
redirect
back
saying:
okay,
you
wanna
talk
to
him
directly
talk
to
this
host.
So
all
this
is
happening
in
network.
You
don't
have
to
program
the
the
the
flow
tables
are
as
with
obs.
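The first-packet-via-bouncer and redirect-learning behavior described here can be sketched as two small tables. A hedged Python sketch (the class names, table shapes, and addresses are illustrative, not Mizar's actual data structures):

```python
# Hypothetical tables: the bouncer knows every endpoint in its network;
# each host initially knows only its bouncer's address.
BOUNCER_TABLE = {"10.0.0.5": "192.168.1.12"}      # pod IP -> owning host

class HostAgent:
    def __init__(self, bouncer_ip="192.168.1.99"):
        self.bouncer_ip = bouncer_ip
        self.learned = {}                          # filled in by redirects

    def next_hop(self, dst_pod_ip):
        # Before a redirect arrives: via the bouncer. After: direct.
        return self.learned.get(dst_pod_ip, self.bouncer_ip)

    def on_redirect(self, dst_pod_ip, host_ip):
        self.learned[dst_pod_ip] = host_ip

def bouncer_forward(dst_pod_ip):
    # The bouncer forwards the packet AND tells the sender where
    # to send the next one, so later traffic goes host-to-host.
    owner = BOUNCER_TABLE[dst_pod_ip]
    return owner, ("redirect", dst_pod_ip, owner)

agent = HostAgent()
first_hop = agent.next_hop("10.0.0.5")            # goes through the bouncer
owner, (_, pod, host) = bouncer_forward("10.0.0.5")
agent.on_redirect(pod, host)
second_hop = agent.next_hop("10.0.0.5")           # now one direct hop
```

This is why provisioning stays cheap: connecting a new endpoint only touches its own host's table and its bouncer, and every other host learns the direct path lazily from redirects.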
D: If you can imagine that a new virtual machine has come up and it needs to talk to all these other previously existing serverless container instances, then each of those hosts needs to know that this new VM is here; that's what OVN does. So you will see that OVN's provisioning time linearly increases with the number of pods and nodes, whereas Mizar's provisioning is pretty much constant time.
D
So
that
makes
it
a
great
solution
for
today's
cloud,
so
cloud
networking
the
reality
of
today's
cloud.
Networking
scenarios
where
you
have
serverless,
which
is
quick
parts,
come
up
to
something
go
away
and
some
pods
are
short-lived.
So
it's
for
those
spots
as
well
that
this
works
very
well.
So
you
want
to
provision
a
new
part,
so
one
operation,
that
always
you
know,
that's
always
a
great
thing,
and
we
are,
of
course
we
when
we
started
building
this.
D
We,
the
sheriff,
took
this
architecture
kept
this
architecture
in
mind,
because
this
works
well
scales.
Well,
we
know
because
we
are
able
to
sit
in
different
places,
different
locations
and
we
are
having
this
meeting
so
following
the
same
architecture
is
a
good
idea
in
terms
of
how
this
is
all
implemented.
D
Kubernetes.
We
have
custom
resources
and
kubernetes,
and
there
is
a
operator
framework.
All
these
terminology
that
you
see
divider
bouncer
vdpc
network.
They
are
all
abstracted
into
custom
resources,
kubernetes,
custom
resources
and
when
these
are
created,
the
operator
goes
to
work
and
executes
a
certain
workflow
to
go
and
program
the
daemons
and
the
the
transit.
D
In
the
previous
slide,
we
saw
the
transit
xdp
here,
it
programs
the
bounce.
So
one
of
these
is
behaving
as
a
bouncer.
It
goes
and
programs
the
bouncer,
it
goes
and
programs
the
demons.
Okay,
you
have
a
new
part
so
when
that
new
part
is
connected,
its
packets,
that
that
come
out
of
that
are
handled
correctly.
D
Okay,
there
we
go
so
custom
resources,
we
use
custom,
kubernetes,
custom
resources
and
the
operators
and
kubernetes
to
manage
all
this,
manage
the
bouncer,
bringing
up
the
bouncers,
initializing
them
and
provisioning
them
and
when
new
parts
come
handling
the
provisioning
of
the
endpoints
for
those
parts,
I've
not
shown
the
those
here,
but
there
are
more
a
few
more
objects.
A
few
more
cr
custom
resource
objects
that
that
make
all
these
things
function.
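The operator workflow described here, a custom resource is created and a workflow drives the data plane to match, follows the standard Kubernetes reconcile pattern. A hedged Python sketch (the resource name "bouncer", the spec fields, and the step names are hypothetical, not Mizar's real API):

```python
# Illustrative reconcile workflow in the spirit of the operator pattern
# described above; resource shape and steps are hypothetical.
def reconcile_bouncer(spec, dataplane):
    # Idempotent: drive the observed state toward the desired spec,
    # returning the steps the operator actually had to execute.
    steps = []
    name = spec["name"]
    if name not in dataplane:
        steps.append(("create", name))
        dataplane[name] = {"endpoints": set()}
    for ep in spec["endpoints"]:
        if ep not in dataplane[name]["endpoints"]:
            steps.append(("program_endpoint", ep))
            dataplane[name]["endpoints"].add(ep)
    return steps

plane = {}
spec = {"name": "bouncer-0", "endpoints": ["10.0.0.2"]}
first = reconcile_bouncer(spec, plane)    # creates and programs
second = reconcile_bouncer(spec, plane)   # already converged: no steps
```

Idempotence is the key property: re-running the workflow against an already-converged data plane does nothing, which is what lets the operator safely retry after failures.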
D
I'll,
perhaps
reserve
it
for
a
future
meeting
to
go
into
details
of
how
control
plane
works.
Maybe
one
of
them.
One
of
the
folks
in
our
team
can
present
that,
but.
D
We
have
not
verified
the
vms
with
container
spots.
Yes,
we
have
done
that.
Well,
let
me
restate
that
I
have
not
verified
it
with
vms.
Yet,
but
in
octo's
we
are
able
to
run
vms
and
containers
side
by
side
and
currently
miser.
There
has
been
some
instabilities
and
we
don't
have
it
fully
working
for
octopus,
but
it
has
worked
in
the
past
with
actos.
D
That's
my
understanding,
shounen,
please
correct
me.
If
I
missed.
A
I
remember
we
probably
did
a
very
basic
vm
and
container
side
by
side
networking
communication
test.
I
don't
remember
exactly,
but
from
the
networking
perspective
actually,
then
there
are
basically
no
much
difference.
We
just
provide
a
a
virtual
network
device,
an
advanced
computing
site,
to
either
mounted
to
a
vm
or
mounted
to
a
container
serverless.
The
self
is
also
running
a
container.
A: From the external perspective, there are only containers and VMs, because serverless services are usually running as containers.
A: Let's say you have a Kubernetes cluster running on a few bare-metal machines. With Arktos, because it's an enhancement on top of Kubernetes, we added the VM capability, so on this Arktos cluster, when you create a pod you can specify whether this pod is a VM pod or a container pod.
D: Sorry, let me just take it back one slide here. So, this part of the veth pair, this is in namespace A of a container or a pod; this might very well be inside a VM.
D
So
as
far
as
misar
is
concerned,
it
really
doesn't
care,
so
if
they
are
talking
between
themselves,
let's
say
this
is
a
name.
This
name.
This
is
your
vm.
Instead
of
pardon
namespace
a
and
this
wants
to
talk
to
a
container
or
part
in
namespace
b,
it
will
probably
use
tcp
http
udp.
We
use.
Of
course,
we
use
geneve
overlay
to
connect
these
two
and
encapsulate
the
packets
and,
if
necessary,
if
they're,
on
different
hosts
and
then
communicate.
D
If
they
belong
to
the
same
vni
or
the
tenant,
then
they
are
part
of
the
same
vp
they're
part
of
the
same
vpc.
E
The
reason
I'm
asking
this
question,
because
I
have
a
app
modernization
workload
in
microsoft,
that
I'm
looking
into
where
we
are
using
some
virtual
machines
and
containers
running
the
databases
on
the
side
for
for
geograph
geospatial,
fencing,
slash
application
sort
of
it
anyway.
I
will
look
into
it
a
little
bit
more
and
I'll
come
back
to
it
in
the
next
meeting,
because
it
looks
like
solving
a
very
interesting
problem.
C: Even from a security standpoint, it shouldn't really matter if it's a VM or not; it's the same interface, so the same security policies can be applied as to your container. From Mizar's standpoint it's an endpoint; it doesn't care, it could be a VM or it could be a container. If you are talking about…
E: And on the VPC, when you're creating the whole virtual private scenario, I see the boundaries of the VM and container in the same place, while serverless — because I'm new to serverless — I just want to understand: are these like two separate subdomains altogether, right there in the picture itself? Is this how it works, or can it work as, like, one Venn diagram?
D
This
is
just
different
applications
connected
to
different
networks.
Okay,
what
this
really
shows
is
that,
within
a
vpc,
a
virtual
machine
running
on
this
say
network
a
can
talk
to
a
server
list,
running
on
network
network
b
or
container
running
on
the
same
network,
a
even
if
they
are
connected
to
a
different
different
networks.
They
can
communicate
by
default,
which
is
the
kubernetes
model
where
everything
it's
a
flat
networking
space.
So
this
could
look
like
a
kubernetes
cluster
that
is
running
a
vm
running
a
container
and
a
serverless
instance.
E: So from the compute standpoint, when you are communicating between both the virtual machines and the containers — if my understanding is right, from the compute standpoint — if I'm trying to access the resources that are running on the container under the same VPC, and if I already have one machine running something like Active Directory and another machine running more of an LDAP kind of scenario…
D: Right, okay. Is your concern about, let's say, we draw another VPC, another circle here, call it VPC2: are they completely isolated from seeing any traffic that goes between these two? The answer to that is yes, there is isolation. If you're connected here, you cannot sniff anything and see what's going on in another VPC; the XDP program just won't send anything your way.
D: Any more questions? How many more slides do we have? A few more. I think this is the core of the tech part of it; there is what we're doing, the current work. I'll go through this fairly quickly, so…
A
That's
fine,
that's
fine!
We
have
like
two
more
minutes
and
we
for
everybody
else
is
okay.
We
can
run
a
little
over,
that's
fine
just
so
I
want
to
make
sure
yeah,
because
we
probably
want
to
discuss
more
about.
What's
the
current
work
and
the
future
plan
right.
D
Yeah
well,
let's
get
to
that
it'll.
Take
a
few
minutes
to
just
go
over
this.
Given
the
context
I
think
miser.
We
have
had
some
stability
issues
with
vsar
and
we're
working
on
fixing
that
foods
been
working
really
hard
and
fixing
that
and
we're
getting
pretty
close
to
getting
it
very
functional
and
usable
and
easy
to
use.
D
And
then
we
are
integration
with
outdoors,
which
is
currently
not
working
very
well.
We
are
working
on
that
as
well,
and
label
based
policy,
the
it's
one
of
the
projects
that
we're
working
for
this
may
30th
release.
In
a
nutshell,
this
is
an
improvement
optimization
to
improve
the
performance
of
network
policies.
D
What
happens
is
that
let's
say
you
are
netpod
one
and
you
are
trying
to
communicate
to
net
part
two,
and
there
are
policies
which
says:
okay,
you
are
allowed,
but
somebody
else
netpod
three
over
here
may
not
be
allowed.
The
late
there
is
kubernetes
has
labels
on
the
parts,
and
here
we
determine
if
label.
If
the
port
contains
label
x,
then
you
allow
it
today.
What
we
have
is
we
check
if
the
part
has
label
x,
we
take
its
ip
address
and
then
we
check
the
ip
address.
D
Now
this
part
restarts
the
ip
address
has
changed,
so
you
need
to
go
and
update
the
policies.
The
idea
is
to
translate
the
labels
on
the
part
into
a
number
that
is
put
into
the
packet.
This
is
where
geneve
encapsulation
plays
a
big
role,
so
you
have
the
vni,
which
is
your
vpc,
equal,
vpc,
isolation
identifier.
D
Then
you
have
the
label,
which
is
a
number
in
the
geneve
option,
and
when
the
packet
comes
here,
we
just
have
to
look
at
the
option
and
say:
okay:
does
this
label
match
allowed
list,
and
if
it
does
you
let
it
go?
If
not,
you
do
a
drop,
And the next thing we're working on is bandwidth. There was a request to see if we could do classified traffic, that is, classify pods as high priority and low priority.
D
What
that
means
is
that
this
part
is
considered
a
high
priority
pod
and
it
should
get
more
network
bandwidth
and
there's
another
part
here,
which
is
a
low
priority
part
that
should
not
get
as
much
network
bandwidth
we
could
go.
We
could
do
one
better
where
we
see
traffic,
let's
say:
there's
a
traffic,
that's
going
from
a
certain
port,
it's
high
priority
and
a
traffic
that
goes
from
a
different
port
because
classified
as
low
priority.
We
do
that
classification
determination
in
the
xdp
program.
D: The other part is to use the rate-limited path, and for that we use the algorithm called earliest departure time, which uses the TC (traffic control) eBPF hook. Traffic control is fairly commonly used; it's been there for a while in Linux, and it's used for rate limiting and traffic shaping of packets.
D: The EDT algorithm essentially puts a delay between a sequence of packets that belong to the rate-limited stream, and that effectively limits how fast they go. There is fair queueing in the TC infrastructure that ensures packets are not departing the host until their departure time has been reached, and that ensures rate limiting of the packets.
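The earliest-departure-time idea described here boils down to stamping each packet with the time it is allowed to leave. A minimal Python sketch of that timestamp computation (the rate and packet sizes are illustrative; in the real path, the stamp goes into the packet's metadata and the fair-queueing qdisc enforces it):

```python
# Sketch of earliest-departure-time pacing: each packet in a rate-limited
# stream is stamped with a departure timestamp; fair queueing then holds it
# until that time is reached.
def edt_timestamps(packet_sizes_bytes, rate_bps, start_ns=0):
    # Space packets so the stream never exceeds rate_bps.
    t = start_ns
    stamps = []
    for size in packet_sizes_bytes:
        stamps.append(t)                           # this packet may depart at t
        t += size * 8 * 1_000_000_000 // rate_bps  # nanoseconds until next slot
    return stamps

# Three 1250-byte packets at 10 Mbit/s: one packet per millisecond.
stamps = edt_timestamps([1250, 1250, 1250], rate_bps=10_000_000)
```

A nice property of this scheme is that the sender never queues packets itself; the pacing state is just one timestamp per flow, which is why it is cheap to do per-pod.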
D: So this is something we're going to have in the next milestone, at the end of this month, May. We've also been involved in some collaboration work with the University of Washington. There is some common interest in looking at how we can do hardware offload for eBPF and XDP, hybrid offload; they're interested from the perspective of load balancing, and we are sharing ideas. We're also working with the University of Science and Technology of China.
D: They are helping us with Mizar hardware-offload functions. They've taken over some pieces of work, like adding statistics to our current Mizar infrastructure and reducing the memory footprint. In the previous slides, one thing you would notice is that for every veth pair you have a transit agent program, so if there are a hundred pods you have a hundred of these. Can we optimize this and have all of them share one common program?
D
So that was the idea; it reduces the footprint of the XDP programs on the host. It's not big, but it helps, so that's one of the ideas we were looking into, and they are looking at doing that. We are looking at doing offload of the bouncer and divider functionality. We did hit some challenges. I think the biggest challenge we have encountered with program offload is that we don't have good NIC hardware offload support; there are few such NICs available.
A
D
On the market right now, the closest we have is from a company called Netronome, and they have certain limitations which are crippling for Mizar. For example, they don't support XDP redirect, and Mizar relies heavily on the XDP redirect construct.
D
A
Maybe. So I recently also read another use case about a cloud provider: they are using BlueField to offload all their compute, network, and storage.
A
These agents, I mean the agents and daemons, all of these things go onto the SmartNIC, but they are using BlueField. It looks like NVIDIA has really put a lot of effort into this SmartNIC, the BlueField.
A
A
D
A
C
A
The time... mm-hmm, yeah. I just don't know what the performance difference is there. Yeah, we can look at that. That's another approach, yeah.
D
C
D
C
D
Ago... okay, yeah, two weeks ago, Thursday, I believe. So he talked about how they are essentially hanging a bunch of flash storage off what is logically a PCI root, and it takes away the work that a host would otherwise need to do for, you know, simple storage.
D
So even having a separate Linux there, a low-cost ARM Linux, could be useful if the scale becomes massive.
A
Yeah. First, I don't know if the ARM cores on the BlueField are very powerful; I don't know what the exact performance level is. And second, when we talk about offloading to the SmartNIC, it's not only about CPU performance. It's also about isolating performance interactions between the users' custom workloads and our own management workloads, so there will be no kind of interference.
A
A
No interference with the real user custom workloads, and it also frees up all the resources on the host to go to the customers, so the whole bare metal. Yeah, exactly. So I think we can still also try this, offloading everything onto BlueField; maybe it's more viable for us, since it looks like there's a capability limit on the Netronome.
D
D
Yeah, thank you. And lastly, we have started working with Click2Cloud in India, and the team there is ramping up on helping out with the Mizar-Arktos integration and one-click deployment. We're looking forward to the collaboration with them.
E
Well, when I see this demo, I think we need to have some senior resources out there; they won't be able to work at this level of depth. From Click2Cloud, mostly they would be able to do testing and mostly UI and integration work, but what we are talking about from the Mizar standpoint or the Arktos standpoint is pretty intense.
E
Yeah, so maybe we need to... I will probably look into it later today to define the use-case scenarios: a massive amount of testing on whatever has been developed, finding the right issues, and the one-click deployment scenario. But for development and support, I mean, after going through this meeting, I think it needs a very different caliber. Yeah, so we need to find some really good, strong local engineers.
C
I think this is what I wanted to highlight as well. I think Vinay has done a great job describing the technology, but from a 30,000-foot view, this is one of the biggest contributions we have as part of Centaurus, because if you want to build a large-scale cloud in open source, you cannot; networking becomes a big impediment. Basically, if you're going to support millions of endpoints coming and going, serverless functions, and containers, you cannot build that using Neutron.
C
C
This is like distributed programming, a distributed application; this is not a flow-table thing, and this is exactly what all the cloud providers do. If you look at Amazon, they don't use OVS or flow tables and all that, because that doesn't scale. So this is one of the biggest value propositions of our platform. If you look at Kubernetes, it doesn't even have its own networking, and if you look at OpenStack, it has Neutron, but you cannot build a big large-scale cloud platform because of its networking.
E
C
E
E
A
E
D
Yeah, I think the main thing is knowledge of Kubernetes, because Arktos is a fork of Kubernetes, and an understanding of overlay networking and how XDP works. I agree; I'm two months in, and there is still a lot I don't know and am learning. I can see that there is a learning curve to it, because you can't just get any C developer and have him write XDP programs; you need to know stuff, yeah.
E
So, well, I think we'll have a different conversation. I'll call you and Xiaoning a little bit later in the day if you have time, or maybe some other day, and we can go through the scope of the different areas and define it clearly, so that there is a better success path for everyone. Okay, yeah.
D
Right, we want to ensure that. Thank you. Yeah, so that pretty much covers it; I'm at the last slide here. A lot of these are not defined in detail yet, but some of the items that were tossed around three months back, when I had just joined the team: we want to have support for multiple interfaces and multiple networks. CNI-Genie is one of the projects that was done here at Futurewei before I joined, and we want to see
D
if we can leverage that. Supporting ClusterIP and NodePort, these are concepts we want to be able to support, all of those. As for our current CNI implementation, pretty much all of Mizar's control plane is written in Python. We want to slowly start transitioning it towards Golang, at least starting with the CNI this year. That is one of the things we might work on, and then the Zeta features. Please don't ask me where that is.
D
I have just gotten a handle on Mizar, but it's a very interesting project. That's mainly for bringing these out into the mainstream as a solution that would rival OpenStack, I believe. Deepak, correct me if that's not the right way to describe it.
C
So I think Zeta is one of the missing pieces. Currently, if you want to build network functions, the only thing you can do is stateless functions, because we don't have a state-management capability, and that's what Zeta is all about. So, for example, for NATing, if you want to build a NAT function or a load-balancer function, you need some kind of state.
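To illustrate the state-management point: a NAT or L4 load balancer must remember, per connection, which choice it made, so that every packet of the flow is treated consistently. A toy sketch of such per-flow state (hypothetical, not Zeta's actual design):

```python
class FlowStateTable:
    """Minimal per-connection state of the kind a stateless datapath cannot
    keep: pin each flow to one backend so all of its packets reach the
    same endpoint."""

    def __init__(self, backends):
        self.backends = backends
        self.flows = {}  # (src_ip, src_port, dst_ip, dst_port, proto) -> backend

    def select_backend(self, flow_key):
        if flow_key not in self.flows:
            # First packet of the flow: make a choice and remember it.
            idx = hash(flow_key) % len(self.backends)
            self.flows[flow_key] = self.backends[idx]
        return self.flows[flow_key]

lb = FlowStateTable(["10.0.1.1", "10.0.1.2", "10.0.1.3"])
flow = ("192.168.0.9", 40321, "10.0.0.100", 443, 6)
first = lb.select_backend(flow)
```

Without the `flows` table, each packet could be hashed to a different backend, which is exactly why stateful network functions need a state-management layer.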
C
D
Now, I'm sorry, I have another meeting coming up.
F
A
G
E
A question before we leave, Dr. Xiaoning: are we looking at the other SIG people also presenting what is going on in their different areas?
A
A
C
Time we presented, that's...
G
A
E
And then all the SIGs have different owners, like different... like Vinay and so on, yeah.
A
A
A
E
A
C
C
D
Yeah, it is going to be great if we can get more confirmation on the multi-networking. This is going to be a unique addition to Mizar that others don't have.