From YouTube: Centaurus Monthly TSC Meeting 8/30/2022
A
Thanks everybody for joining. This is our monthly TSC meeting. In this meeting the focus is on Mizar: Vinay is going to go through all the updates, all the work that has been done in the last few months. Before we do that, Vinay, I would request, because I know some of the folks are new, that you first give a very 30,000-feet introduction of what Mizar is, a high-level view of how it is different from other cloud networking solutions. That would be great.
A
And just for everybody else, as you know, there are four projects now under the Centaurus umbrella: Mizar, which is our networking project; Arktos, which is compute; then we have Quark Container; and we also got approval for the SLO work we did, which we haven't incorporated yet. Mizar is the one Vinay is going to talk about, the cloud network virtualization project.
B
Okay, this is probably not the best diagram to go with. I'll just go to the diagram that I have in one of the downstream design documents to talk about Mizar here.
B
This diagram I was going to cover as part of another project, but what is going on here is this: if we focus on just this portion of it, these two nodes up here, we are looking at two worker VMs which are running a bunch of pods, and these pods need to communicate. These are Kubernetes pods.
B
When
kubernetes
spots
talk
to
each
other,
you
need
some
kind
of
networking
or
if
they
want
to
talk
external
to
the
cluster
network,
you
need
some
kind
of
networking
and
miser.
There
are
various
solutions
out
there.
Flannel
weave
psyllium
is
one
of
the
very
popular
ones
now,
because
they
use
ebpf
technology
as
well
and
they're.
Pretty
good
they're
really
done.
They've
been
working
on
it
for
several
years
and
they
have
a
lot
of
features.
B
Mizar is another CNI driver, based on XDP. We use XDP technology, which is eBPF programs that run attached to the NICs. The reason to start the Mizar project was one of the requirements that came up when we were looking at the other project, called Arktos, which has multi-tenancy. In multi-tenant networking you want isolation: Arktos provides multi-tenancy for clusters, so you can have multiple isolated tenants and users sharing the same worker nodes, but there was no solution in the market that gives you isolated networking for them. Mizar was created with the intent of providing VPC isolation. XDP is just the technology that was current at the time; it is still current, and eBPF and XDP are some of the hottest technologies.
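As a rough illustration of what "an eBPF program attached to the NIC" means, here is a minimal XDP sketch (purely illustrative, not Mizar's actual data plane): the C code is compiled to BPF bytecode and attached to an interface, sees every raw packet before the kernel stack, and returns a verdict.

```c
// Minimal XDP sketch: count packets and let them continue up the stack.
// Illustrative only; Mizar's real programs do VPC encap/decap and redirects.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_count(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);
    return XDP_PASS;   /* hand the packet to the normal kernel stack */
}

char _license[] SEC("license") = "GPL";
```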
B
Sherif started this project and leveraged eBPF as the means to deliver the CNI networking solution, and this is the Mizar documentation; essentially, this is the architecture page we are looking at. In a nutshell, what's really going on is this: if you look at this picture, take this blue part here (can everyone see my mouse cursor?).
B
That networking solution, the green dashed lines, is what the CNI provider provides. It's an overlay network and it connects up the pods; it gives the pods the ability to communicate with other pods and services and to reach outside. That is essentially, in a nutshell, what a CNI does, and Mizar is one of the CNI providers.
B
Does that provide a high-level picture? I don't really have a good slide for this one, but does that give you a high-level picture of where Mizar fits?
A
Yeah. One thing I want to add: one of the reasons was that we were building this hyperscaler platform, Arktos, which can have a very large number of nodes in it. Once you have such a big cluster environment, so many machines, potentially more than 100K machines, you may have millions of endpoints that come and go. The current networking solutions that Vinay enumerated, Cilium and Neutron and all those kinds of things, don't scale up to that level.
A
And there's a reason behind that: if you provision a new endpoint, you have to go to all machines and update the flow table; they use the flow-table, flow-rules concept to enable all this. That's one of the main reasons we moved away from that concept altogether. That was why we had to reinvent the network virtualization solution from the ground up, and that's what Mizar is.
B
Deepak, that's a good point. The idea is that in a cluster, pods are ephemeral by nature and fairly short-lived; they come and go as needs arise and they're scaled up and down, and we can't have the network being the long pole in provisioning the pods, getting the IP and getting them network-ready. Network-ready means: sure, the pod gets an IP, but is it able to communicate with all the other pods in the cluster and outside the moment it gets this IP? Not necessarily. With OVS, we know that when a pod tries to reach some other pod, it first has to go to user mode and configure the flow table, and the pod is not network-ready until that happens on demand, so there is a latency involved when it starts communicating. In the Mizar design, the architecture allows for a constant-time operation. You go to the bouncer and you tell the bouncer: okay, there is a new pod.
B
It's come up, it's going to be sitting on this virtual machine, and it has this MAC address; you, the bouncer, are responsible for it, so if anybody wants to reach it you are responsible for telling them where it is. And then you tell the virtual machine: okay, you have a new pod, and if it wants to reach out to any other pod or anywhere else, send the packet to this particular bouncer.
B
You can have one or more of these bouncers for scaling purposes, and that essentially gives you pretty much a two-step operation: configure the virtual machine where the pod is running with the address of the bouncer, and configure the bouncer with the address of the virtual machine where the new pod is. When anybody tries to reach it, there is a one-time operation where you go through the bouncer, and from there on it's direct one-to-one communication.
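To make the constant-time idea concrete, here is a small hypothetical sketch (names and layout are illustrative, not Mizar's actual structures) of the kind of eBPF map the data path can consult: a single endpoint-to-bouncer mapping, so forwarding never requires programming per-flow rules on every machine.

```c
// Hypothetical sketch of an endpoint -> bouncer lookup table.
// Key/value layout is illustrative, not Mizar's real structures.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct endpoint_key {
    __u32 vpc_id;     /* tenant / VPC identifier */
    __u32 pod_ip;     /* destination pod IP */
};

struct bouncer_info {
    __u32 host_ip;    /* where to tunnel the packet */
    __u8  host_mac[6];
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1 << 20);
    __type(key, struct endpoint_key);
    __type(value, struct bouncer_info);
} bouncers SEC(".maps");

/* Constant-time: look up where to send traffic for a destination pod,
 * instead of updating flow tables on every host when a pod appears. */
static __always_inline struct bouncer_info *
lookup_bouncer(__u32 vpc_id, __u32 pod_ip)
{
    struct endpoint_key key = { .vpc_id = vpc_id, .pod_ip = pod_ip };
    return bpf_map_lookup_elem(&bouncers, &key);
}

char _license[] SEC("license") = "GPL";
```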
B
Okay, there are three things that have been going on which I've been working on around Mizar and its performance and scaling measurements. Mizar itself we've been working on for the past couple of years, and what's next for Mizar: our goal is to see if we can leverage any offload facilities. There is the Netronome NIC, which we have been working with to try to get some offload performance.
B
They don't support tail calls and XDP redirect, which Mizar uses, so that limits what we can offload. In fact it's very limited: only the bouncer functionality, the pieces of Mizar where you don't need these tail calls, can be offloaded, and that's very limiting for us. So that's one of the efforts that's been going on.
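For context, a tail call is the eBPF primitive that lets one program jump to another through a program-array map, which is how a packet pipeline chains its stages; a minimal generic sketch (not Mizar's actual pipeline) looks like this, and it is exactly this kind of construct that the NIC offload cannot run.

```c
// Generic tail-call sketch: one XDP stage jumps to the next through a
// program-array map. Illustrative only; not Mizar's actual pipeline.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define STAGE_DECAP 0

struct {
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 8);
    __type(key, __u32);
    __type(value, __u32);
} stages SEC(".maps");

SEC("xdp")
int xdp_decap(struct xdp_md *ctx)
{
    /* ...a real stage would strip the outer tunnel header here... */
    return XDP_PASS;
}

SEC("xdp")
int xdp_entry(struct xdp_md *ctx)
{
    /* Jump to the decap stage; if the map slot is empty, fall through. */
    bpf_tail_call(ctx, &stages, STAGE_DECAP);
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```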
The second thing I'll give a status update on is Kubemark. The reason for Kubemark is that today we don't have a good way to do this:
B
You have various different CNI solutions, like Cilium and Flannel, and of course we have Mizar, but there is no good way to compare their control-plane performance or to see how they perform at large scale in their data plane. Kubemark is a tool that's been used by Kubernetes for large-scale performance emulation of the Kubernetes control-plane components.
B
It
is
a
pretty
good
scaled
version
of
what
real
world
traffic
real
world
performance
would
look
like,
so
it
has
been
pretty
successful
and
they
use
it
regularly
in
their
ci
testing,
for
detecting
performance
regressions
in
the
kubernetes
components,
and
the
last
thing
we
want
to
look
at
is
here
cncf
project,
cni
genie,
we're
trying
to
restart
that
and
I'll
go
over.
Why
we're
doing
that?
B
So let's start with the Mizar-UW collaboration. First, some of the highlights for Mizar over the past year: our CFPs were selected. We submitted CFPs for the Open Source Summit that happened a couple of months ago in Austin, Texas; four CFPs from Futurewei were selected, and two of them were from the Mizar team.
B
One of them was the QoS work that we did late last year, and the other was the label-based policies work that we also did last year. Both of these presentations were well received. QoS in particular drew a lot of attention; at the speakers' party I met someone who came up with a new use case for it, and they are very interested in getting this mainstreamed into Kubernetes.
B
At
some
point,
the
kubernetes
leadership
tim
hawkin
in
particular,
is
interested
in
seeing
some
kind
of
a
qs
version
proposal
go
into
kubernetes,
so
we're
still
trying
to
gather
figure
out
what
the
requirements
are.
I
think
my
main
point
is
that
we
need
not
only
qos.
We
need
to
have
bandwidth
reservation.
We
have
bandwidth
limits
in
in
the
form
of
annotations,
which
lets
you
set
bandwidth
limit
for
a
pod.
B
What
my
take
and
another,
the
take
of
another
research
project,
that's
been
going
on
in
ibm
their
take
is
that
you
we
want
to
have
requests
as
well
as
limits,
and
I
think
I
and
tim
disagree
on
that
one.
B
So
we're
still
not
it's
been
kind
of
on
pause
because
of
various
other
things
and
resource
constraints,
but
I
think
there
is
a
strong
interest
in
seeing
qos
at
some
point
in
kubernetes
how
much
we
get
involved
is
going
to
be
up
to
the
resourcing
that
we
have
the
third
thing:
that's
these
are
we've.
We
just
got
word
last
week
that
the
qrs
were
that
we
submitted
the
cfp
for
the
oane
summit,
open
networking
and
ed
summit
in
seattle
in
this
coming
november,
and
this
one
got
accepted.
B
I
actually
submitted
two
different
topics:
the
cubemark
work
that
I'm
doing,
and
the
qs
this
one
this
one
I
was
hoping
that
they'd
select
the
kubernetes
because
that's
new,
but
they
didn't
they
selected
this
one
which
we
already
presented.
B
The plan at this point is that we don't want to just repeat what we already presented in Austin two months ago.
B
It's
kind
of
boring
what
we
want
to
do
is
pangdu
did
a
lot
of
very
good
work
last
year,
late
last
year
with
mizar,
using
it
to
connect
edge
pods
to
the
cluster,
so
two
parts
running
in
the
core
core
cloud
of
the
let's
say
in
aws
or
one
of
your
clusters,
that's
running
in
a
cloud
somewhere
and
we
want
to
see
this
is
not
like
ready
to
show
demo
yet,
but
if
time
permits,
what
we
want
to
do
is
see
if
we
can
set
up
a
demo
and
present
misar
as
a
solution.
B
We want to show that it works for edge networking as well. We're still figuring out whether this is feasible, but that's something we want to target, pending time; everybody's fully booked on their projects, and we just don't know whether we'll get the time to do this.
B
The
other
thing
that's
been
kind
of
going
on
again
on
off
again.
Is
this
engagement
with
uw
to
get
misar
to
get
ebp
offload
specifically
for
mises
architecture,
and
the
professor
irvin
has
been
working
with
the
liquid?
I
o
right
marvel
yeah
marvel
liquid
io,
so
he's
been
working
with
them,
they're
partnering
with
them
for
offload
ebpf
offload,
and
I
don't
know
if
they're
using
fpga
or
if
they
have
actual
no.
B
Okay,
so
if
they
can,
if
they
can
offload
ebpf
programs
and
xdp
programs,
and
they
can
use,
they
can
use,
write
up
the
firmware
to
be
able
to
add
support
for
some
of
the
primitives
that
we
need,
like
xtp,
redirect
and
then
tail
calls
that
would
essentially
bring
misar
into
the
high
performance
category
of
that
can
leverage
offload,
and
I
think
this
is
pretty
much
going
to
be
the
next
big
thing.
That's
coming
up
in
the
development
of
ebpf,
and
we
do
want
to
stay
involved
in
this.
B
Part of this is also one of the limitations that we have with the kernel in our design: when we send packets we use XDP as well, and that may not be the right solution. On the receive side, having XDP is great, because the packet comes in from the network as just a raw buffer; you process it, or the offload can process it, do all the decapsulation, figure out which endpoint it needs to go to, and call an XDP redirect, and that's probably the fastest path.
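A rough sketch of that receive path, assuming a simple outer UDP-style tunnel header (the header size and map contents are placeholders, not Mizar's actual protocol): pop the outer header, then redirect the inner frame to the destination pod's interface through a devmap.

```c
// Illustrative receive-side sketch: strip an assumed outer tunnel header
// and redirect the inner frame toward the destination pod's interface.
// The header length and map contents are hypothetical, not Mizar's format.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define OUTER_HDR_LEN 50   /* assumed: eth + IP + UDP + tunnel header */

struct {
    __uint(type, BPF_MAP_TYPE_DEVMAP);
    __uint(max_entries, 256);
    __type(key, __u32);
    __type(value, __u32);
} tx_ports SEC(".maps");

SEC("xdp")
int xdp_rx_decap(struct xdp_md *ctx)
{
    /* ...a real program would parse and validate the outer headers here,
     * and pick the destination from the tunnel metadata... */

    /* Pop the outer encapsulation so only the inner frame remains. */
    if (bpf_xdp_adjust_head(ctx, OUTER_HDR_LEN))
        return XDP_DROP;

    /* Hand the inner frame to the interface stored in slot 0. */
    return bpf_redirect_map(&tx_ports, 0, 0);
}

char _license[] SEC("license") = "GPL";
```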
B
On the send side, the packets coming from the pod have already traversed the host networking stack and are socket buffers, and translating them to XDP just to use XDP redirect is an overhead. This was necessary two years ago when Mizar started, but what has happened in the past year is that the Isovalent and Cilium folks have added new primitives to the kernel that let you redirect a socket buffer directly, like bpf_redirect_neigh and bpf_redirect_peer.
B
There
are
a
couple
of
new
functions
that
have
been
added.
We
probably
want
to
try
and
take
advantage
of
that
and
see
if
we
can
do
away
with
the
xdp
translation
overhead
that
way
we
what
we
want
to
do
is
we
want
to
offload
evpf
program.
It's
a
abpf
programs
on
the
send
side
which
can
process
socket
buffers
and
on
the
receive
side,
it's
xdp
buffers.
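On the send side that would mean a TC (traffic control) eBPF program operating on the sk_buff. Below is a minimal sketch using the bpf_redirect_peer() helper mentioned above; the interface index and the policy are made-up placeholders, not Mizar's design.

```c
// Send-side sketch: a TC eBPF program that hands the packet straight to a
// peer interface (e.g. the other end of a veth pair) without converting
// the socket buffer to XDP. The ifindex is a hypothetical placeholder.
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define PEER_IFINDEX 42   /* hypothetical destination interface */

SEC("tc")
int tc_egress_redirect(struct __sk_buff *skb)
{
    /* ...a real program would look up the destination from a map here... */

    /* Jump the sk_buff directly into the peer device, skipping the
     * extra per-CPU backlog traversal. */
    return bpf_redirect_peer(PEER_IFINDEX, 0);
}

char _license[] SEC("license") = "GPL";
```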
B
So
that's
a
small
change,
we're
looking
at
doing
to
the
architecture
again
that
might
really
make
things
that
might
be
the
best
solution
that
we
land
at,
and
this
is
also
necessary
because
on
the
send
side
we
don't
have
a
hook
for
hooking
up
xcp
programs
and
for
good
reason.
Now.
B
If we had that hook, then it would have made sense, but it's not there. It would be a change to the kernel, and I think somebody from Red Hat mentioned at some point that it would be good to have, but there's been no effort to add it, so I guess the demand is not really there. We wanted it because we want to reduce the footprint, optimize, and then put the XDP program on the transmit side into the NIC as well, and currently we cannot do that.
B
So
this
is
pretty
much
the
update
for
the
offload
work.
A
I just want to add something. This offload thing is very profound, actually, if you think about it; Amazon was the very first one who did it. What happens when you do network virtualization is that you have tons and tons of network virtualization code, and that uses up CPU cycles on the host. So what they realized was that they needed a SmartNIC, which they called Nitro.
A
Actually,
so
they
built
their
own
smart
mac
hardware
and
they
offloaded
all
the
network
virtualization
code
base
through
their
nitro
smart
net.
Actually,
and
that's
what
we
are
trying
to
do
as
well.
So
by
doing
that,
so
we
can
pretty
much
cloud
provider
can
give
the
whole
host
cpu
cycles
to
the
end
customer.
Basically,
so
in
there
would
not
be
any
single
line
of
infrastructure
code
running
on
the
host.
So
that
way
you
can
get
a
bare
metal
performance
and
all
the
the
whole
machine
to
then
customer
base.
A
So
that's
the
direction
amazon
went.
You
know
a
few
years
ago.
Google
is
doing
that
as
well,
and
that's
what
we
are
trying
to
do
that
as
well.
Essentially,
offload
all
the
infrastructure
code
to
the
smart
name
and
and
then
still
and
and
enable
network
virtualization
and
all
those
things
you
know
as
part
of
the
cloud
infrastructure.
A
Yeah
they
they
they,
they
partnered
with
transcendo
smart
nick
company
called
consent.
Oh
and
they're
they're
doing
that.
Yes,
okay,
everybody!
I
mean
that's
the
common
sensical
thing
to
do.
Basically,
you
see,
and
he
and
the
day,
so
it
not
only
provides
it's
not
just
a
business
reason.
It's
a
security
reason.
So
all
of
your
code
base
is
not
on
the
host.
So
if
something
goes
wrong
or
somebody
kind
of
hacks
into
it,
only
the
smart
nic
would
be
affected.
So
the
end
customer
environment
would
be
would
not
be
affected
by
that.
B
Okay, all right. Any further questions on the offload work? Anything that's not clear?
B
All
right,
let's
step
into
the
cube
cubemark,
so,
as
I
mentioned
earlier,
I
think
the
main
goal
for
doing
this
was
to
today.
If
you
want
to
say,
try
out
how
how
well
does
your
cni
solution
scale
you
have
to
let's
say
kubernetes
in
theory
allows
you
to
deploy
a
5000
node
cluster
and
you
can
deploy
for
that.
You
can
deploy
5000
vms,
it's
going
to
just
cost
that
much
money.
A
They call it an IPU now, actually. This whole thing has been evolving: it was initially called a SmartNIC, then they started calling it a DPU, a data processing unit, and then Intel coined the term IPU, infrastructure processing unit. I don't know the exact name of their product; if you do a Google search you will find out.
B
Okay. Google has been using Cilium, and even AWS is offering Cilium as a CNI solution now; I just don't know whether they leverage offload or not. Last I checked, on the hardware NIC side Netronome was the only one out there. There are also efforts to do FPGA-based eBPF processing, which apparently has gotten some gains.
B
It's
limited
in
terms
of
how
much
it
gains,
because
the
fpga
clocking
is
always
less
than
like
eight
to
ten
times
less
than
what
cpu
clocking
in
the
server
is,
but
it
is
still
offloading
functionality
from
work.
That
cpu
has
to
do
is
now
being
done
by
specialized
hardware,
so
there
is
always
a
gain
with
that.
B
It
won't
be
like
a
neck
to
neck
with
the
cpu
okay
like
in
terms
of
comparison.
It's
not
it's
not
quite,
but
it's
taking
work
away
from
the
cpu.
The
problem
with
fpga,
as
far
as
I
know,
is
that
you
have
to
know
how
to
do
the
fpga
programming,
which
is
not
easy.
It's
very
time
consuming.
B
Language,
yeah
yeah,
the
very
longer
I
don't
know
what
they
use.
This
is
so,
let's
get
to
go
mark.
I
think
this
one.
I
want
to
just
briefly
go
over
the
architecture,
diagram
that
I
had
over
there
to
give
and
give
a
sense
of
what
we
are
talking
about.
So
in
coolmark.
What
happens?
Is
you
deploy
today?
B
If
you
want
to
deploy
use
kubernetes,
you
do
you
ask
for
you,
go
to
aws
and
ask
for
amazon
and
ask
for
a
eks
cluster
or
you
can
get
a
gke
cluster,
which
is
really
a
one
or
multiple
masters
and
then
a
bunch
of
work,
a
bunch
of
worker
nodes.
So
this
in
this
hashed,
the
gray
hashed.
This
is
the
kubernetes
cluster.
Here,
these
three
nodes,
it's
a
simple
two
node
two
worker
one
master
cluster
and
you
talk
to
this
cluster.
To
create
parts
run,
deploy
your
pod
workload.
B
Maybe
your
nginx
server
is
running,
so
this
orange
hashed,
not
so
hollow
node,
which
I
call
is,
is
just
a
pod,
and
this
could
be
your
nginx
pod
in
norm
in
normal
kubernetes
usage,
you,
your
application
parts
run
in
this
cubemark.
What
it
does
is
it
when
you
deploy
this,
it
deploys
another
cluster
on
top
of
this
cluster.
B
That's
why
this
cluster
is
called
the
admin
cluster
and
the
kubemark
cluster,
which
runs
on
top
of
the
admin
cluster
is
called
the
kubemark
cluster
kumar
cluster
has
its
own
master
master
node,
which
is
similar
to
the
admin
master
node,
except
that
when
you
deploy
a
kook
mark,
it
creates
these
hollow
nodes
and
the
hollow
nodes
are
faking.
The
you
you
can
they
come
up
and
register
as
nodes
with
this
master.
B
So
if
you
go
to
this
master
and
then
ask
give
me
list
all
the
nodes,
it's
going
to
list
you
one
two
three,
it
will
look
like
real
nodes,
but
they're,
not
they're,
just
parts
pretending
to
be
nodes,
they're
running
a
fake,
cubelet
they're,
and
why
fake
cubelet?
Because
it
does
certain
things,
it
downloads,
the
image.
When
you
ask
this
kubelet
fake
kubelet
to
run
a
pod,
it
downloads
the
port
image,
but
doesn't
really
start
any
workload.
B
It
just
updates
the
status
saying:
okay,
the
pod
is
now
running
and
the
reason
to
do
that
is
it
mocks
all
the
operations
of
running
the
pod
for
the
for
kubernetes.
So
when
you
ask
it
to,
when
you
ask
this
master
to
run
nginx
pod
or
a
hundred
instances
of
engine
exports,
you'd
create
a
deployment,
then
the
controller
has
to
go
in
and
take
that
deployment
and
create
a
hundred
parts.
The
scheduler
has
to
go
and
schedule
those
hundred
parts.
B
Of
course,
the
api
service,
you're,
interacting
with,
is
being,
is
being
pressured
or
it's
being
put
into
use
by
all
this
these
operations,
the
controller
is
watching
the
api
server
for
objects
and
then
taking
action
writing
to
the
api
server.
The
scheduler
is
watching
the
api
server
for
unbound
parts,
and
then
it
is
updating
the
api
server
with
okay.
These
are
the
node
bindings
for
the
pods,
and
then
these
not
this
the
hollow
nodes,
the
cubelet
that's
running.
B
There
is
watching
the
api
server
and
looking
for
parts
that
are
assigned
to
it
and
then
it's
gonna
start
the
part,
except
that
it
doesn't.
It
pretends
to
start
the
port.
That
is
the
normal
kubemark
operation.
B
What it does is scale-test the API server, the scheduler, and the controller. As for the proxy, it's a hollow proxy, so Kubemark doesn't really do anything with the proxy, because there's no networking in current Kubemark deployments.
B
The
work
that
I
did
here
adds
the
component
of
a
real
cni
networking
to
kubemark
and
the
reason
to
do
this
is
imagine
you
wanted
to
test
the
network
performance
of
like
say,
10,
000
pods.
You
have
to
deploy
at
least
100
vms
and
those
100
vms
cost
you
money
and
to
give
an
idea.
Let's
say
you
pick
the
smallest
smallest
vm
in
this
category,
so
it's
going
to
cost
you
0.13
13
cents
per
hour.
That's
your
cost
for
running
this
vm.
B
So if you run 100 of these to deploy 10,000 pods and see how their networking performance is, you're paying about $13 an hour. What we can do with this project is deploy the same 10,000 pods, mock-deployed, on a single n2-highmem-32, which is going to cost you about a sixth of that: the 100 small VMs come to roughly $13 an hour,
B
while this costs you about two dollars an hour, so there is a factor of about 6.5x savings, and it is still going to stress the CNI.
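The arithmetic behind that comparison (using the approximate on-demand prices quoted in the discussion, not exact current pricing):

```latex
\begin{align*}
\text{100 real nodes:} \quad & 100 \times \$0.13/\text{hr} \approx \$13/\text{hr} \\
\text{Kubemark, 1 large VM:} \quad & 1 \times \text{n2-highmem-32} \approx \$2/\text{hr} \\
\text{Savings:} \quad & \$13 / \$2 = 6.5\times
\end{align*}
```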
What we do here is create these blue pods; these are network-performance pods. The change that we made, and the reason to call it a "not-so-hollow node", is that it can run a real pod, which today's Kubemark cannot do; it can run a real CNI.
B
When you deploy Mizar or Cilium or Flannel, they create a DaemonSet object, which goes and creates a pod on each node, and that pod is responsible for configuring the overlay networking and wiring up the pods. In today's Kubemark you cannot do this, because it fakes it: it would download the Flannel pod and say yes, Flannel is running, but it's really not running, so you cannot ask it to run real pods on that node at all.
B
The change that I made was to run a real CNI pod, and now it runs real networking in this hollow node; that's the green CNI networking. It could be Flannel, it could be Mizar, it could be Cilium; right now we have all three working, Flannel, Mizar, and Cilium, and you can add support for more as you go,
B
If
you
want
to
test
more
cni
solutions,
and
once
you
have
that
real
networking
now
you
can
deploy
parts
real
parts
that
can
talk
to
other
pods,
the
real
workloads.
Of
course,
you
know,
take
up
resources,
so
there
is
a
limit
on
how
many
you
can
do,
but
what's
better,
what's
the
real
benefit?
Is
these
purple
pots
which
are
performance
spots?
These
are
treated
special.
These
are
special
case
by
the
by
our
solution.
B
It manages the cgroups for the pod, but it delegates most of the work, even the creation of the pod sandbox and running the containers, to containerd, or CRI-O, or whatever CRI solution you're using. In this case I'm using containerd: we run a real containerd, except that it's modified so that it can run CNI pods as real workloads.
B
But
when
it
gets
these
net
performance
spots,
which
is
which
has
annotation
net
perfect,
was
a
true.
It
tells
the
cni
to
give
it
an
ip,
but
it
doesn't
really
start
anything,
but
you
can
use
this
for
stressing
the
cni
driver
and
you
can
run
you
can
write
tests
in
here
to
go
and
you
know,
run
targeted
tests.
B
For
example,
you
in
this
particular
case
you
can
have
this
blue
pod
talk
to
this
pod
and
this
one
talk
to
this
pod
and
have
this
one
talk
to
this
part,
so
you
can
have
pairs
of
traffic
or
you
can
make
one
of
these
spots.
Let's
say
this
part
here
as
a
server
and
all
the
remaining
parts
connect
and
get
data
from
the
server.
So
that
stresses
the
networking.
Of
course,
this
is
not
going
to
be
as
at
real
scale,
but
it
is
a
model
skill
and
one
of
the
things.
B
One
of
the
idea
of
this
effort
is
to
see
if
we
can
get
a
good.
Is
this
going
to
be
a
true
approximation
like
what
we
have
today
good
mark
is
a
if
you
see
some
latency
changes
from
one
build
to
another.
You
see
some
performance,
latency
change.
Okay,
the
scheduler
is
taking
longer
to
schedule
the
parts.
There
is
a
regression
that
does
translate
to
real-world
regression.
When
you
write
on
real
cluster,
we
want
to
see
if
the
same
thing
occurs
with
network
traffic
as
well.
B
Let's
say
if
you
had
a
hundred
of
these,
these
smallest
nodes
running
and
you
run
real
hundred
hundred
ports
on
each.
So
you
have
ten
thousand
pods
total,
and
then
you
run
this
traffic
pair
or,
let's
say
hundred
of
those
spots
are
service
and
nine
hundred
and
nine
hundred
990
ports
are
clients
that
are
connecting
to
those
hundred
parts.
You
want
to
run
some
kind
of
traffic
pattern.
B
You
can
do
that
by
doing
this,
it's
just
going
to
cost
you
more
money
when
you
multiply
this
by
100,
but
you
can
also
do
the
same
thing
with
a
smaller
vm,
because
you're
emulating
the
you're
cutting
out
the
actual
workload
running
part
and
only
doing
the
test
part
the
test
that
you
care
about.
B
So that is an introduction to this Kubemark work. Does that give a picture of why we are doing this, before I go to the status updates?
C
It would be worth seeing some of these things in a functioning model. It would be really helpful to see some of the demos; I mean, teams have used Kubemark in the past for testing purposes, it's a pioneering tool for testing those sorts of scenarios. But from the standpoint of this discussion it's equally important that we have something functioning, because you might be trying all these things in side-by-side scenarios.
C
I liked the Kubemark presentation that you did, how it is cost-effective, saving roughly seven times the money because you are wisely using the 32-core machine compared to the smaller machines and then creating those nodes on it; that's a very interesting and good idea. I also liked the other presentation where you were talking about the CNI projects and how you're handling the networking more effectively.
C
But
it's
also
important
for
for
us
to
see
that
how
we
can
sort
of
like
see
these
function
scenarios,
because
we
have
planned
to
put
some
of
this
scenario
to
use
cases
and
use
cases
in
terms
of
like
overall
uber
use
case
from
a
business
community,
mostly
like
deepak-
and
I
was
talking
last
week
about
health
care
scenario.
B
Okay, let me just... I don't think I have a setup for this one, but...
A
So, instead of Centaurus being just an infrastructure play, he's trying to build out a real business use case, healthcare. Like Prashanth: as you know, his team has stepped forward to build out a healthcare use case, a telco use case, and all that, and then we can showcase all of these components as part of that. That will definitely resonate well with the end-user community.
C
If you give me one minute, because I need to take care of an escalation, I'll explain in less than two minutes what I'm trying to achieve. It's a very simple story, and maybe it would be helpful if we can start looking at how it is possible. I need support for making sure that Centaurus and the other projects become relevant for the community.
C
So
it's
a
simple
architecture.
What
we
have
put
together,
it's
pretty
obviously
visible
so
human
from
neurology.
All
the
way
to
your
respiratory
to
everything
is
connected
through
various
models,
whether
it's
variable,
whether
it's
hospital,
where
it
is
laboratory
or
a
food
logger
which
generates
the
data
for
next,
which
is
an
edge
site,
goes
to
ato's
measure
combination,
use
the
ai
and
then
sort
of
like
produce
the
outcome
for
bots
or
for
power
and
stuff.
C
Like
that
very
simple
story:
a
man
with
diabetes
or
man
with
a
sleep
apnea
records,
his
data
gets
all
those
analytics
and
or
maybe
a
healthy
man
like.
Why
he's
a
healthy
man
versus?
Why
he's
a
sick
man
using
the
entire
story
and
then
sort
of
like
using
some
of
the
cutting-edge
technology
to
do
the
prediction
by
knowledge
graph
or
by
event,
mining
or
those
things?
A
Let me pencil you in now as part of the agenda for the next meeting, for this demo.
A
And offloading it to SmartNICs: if anybody in the research community is interested in working and collaborating with us, and especially in joining Prashanth and his team to build out a business use case, that would be really great. So Thomas, Stefan, Victor, anybody, if you guys think anybody wants to take part in this, we'll be very happy to facilitate the meetings and the whole effort that's going on.
D
Thanks a lot for this very interesting presentation, and also for this great suggestion, Deepak; we will definitely think about it. I also think this use case is really impressive and will definitely show the power behind these technologies.
D
There's actually one question I would like to ask you, Vinay, about Kubemark, because we've also been trying to introduce some benchmarks on large clusters for our scheduler, which is also part of our SLO work. Up to how many nodes have you simulated so far with Kubemark? Are there any numbers?
D
So I...
B
I have a question for you too, but let me answer your question first. With Flannel I did a simulation with 10,000 fake pods, which is just doing the CNI operation. What happens when a pod comes up? Let me go back to this picture here. When a pod is created, let's say this purple pod, it wants an IP.
B
Before the pod is launched by containerd, it asks the CNI driver to give it an IP by calling a CNI ADD operation, and when the pod is done, it releases that IP, that network connectivity, the endpoint, by doing a CNI DEL. After the CNI ADD is done and it gets the IP, it creates the pod and container workloads. We cut out the container-workload-creation part of that, but keep the CNI ADD and CNI DEL part of it.
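For reference, CNI ADD and DEL are exec-style calls: the runtime runs the plugin binary with CNI_* environment variables set and the network configuration on stdin, and the plugin prints the result (including the assigned IP) on stdout. Below is a rough sketch of the runtime side of an ADD call; the plugin path, netns path, and config are placeholders, not the actual containerd or Mizar configuration.

```c
// Rough sketch of how a container runtime invokes a CNI plugin's ADD
// command. Plugin path, netns path, and config are placeholders.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Per the CNI spec, the operation and pod context travel as env vars. */
    setenv("CNI_COMMAND", "ADD", 1);                 /* "DEL" on teardown */
    setenv("CNI_CONTAINERID", "example-container-id", 1);
    setenv("CNI_NETNS", "/var/run/netns/example", 1);
    setenv("CNI_IFNAME", "eth0", 1);
    setenv("CNI_PATH", "/opt/cni/bin", 1);

    /* The network configuration is written to the plugin's stdin. */
    const char *conf =
        "{\"cniVersion\":\"0.4.0\",\"name\":\"example-net\","
        "\"type\":\"example-plugin\"}";

    FILE *plugin = popen("/opt/cni/bin/example-plugin", "w");
    if (!plugin)
        return 1;
    fputs(conf, plugin);
    /* The plugin prints a JSON result (with the assigned IP) on stdout;
     * a real runtime would capture and parse that output. */
    return pclose(plugin);
}
```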
B
So that gives us the scale, and I was able to simulate 10,000 pods on an n2-highmem-32; that's what I shared with the SIG Scalability folks a couple of months ago. I reviewed the design with them; these are the guys who are responsible for this area, SIG Scalability owns Kubemark and all the scalability work that Kubernetes does. They reviewed this design and they kind of agreed that it is a good approach.
B
This
is
a
good
approach
and
they
didn't
raise
any
concerns
so
based
on
that
we're
continuing
our
work.
So
essentially,
this
has
been
reviewed
by
the
industry
by
experts
in
the
industry.
This
is
he's
the
sig
lead
for
his
ambitious
sig
lead
for
six
scalability
and
his
current
sig
lead.
So
they
have
looked
at
this
and
I
think
the
main
data
point
I
can
give
you
from
what
I
did
a
couple
of
months
ago.
I
don't
have
the
demo
setup
ready
right
now,
but
this
is
what
I
shared
with
them.
B
We
were
able
to
use
highmem32
to
simulate
100
of
these,
not
so
hollow
nodes
with
flannel,
and
that
is
that,
essentially
each
each
node,
you
can
run
100
pods
real
pods.
In
the
hollow
node
case.
You
can
probably
run
100
real
parts.
I
haven't
tried
it
because
you're
limited
by
the
resources
that
have
that
are
available
in
the
node.
B
It's
not
10
000
real
parts,
but
it
prevent
it
pretends
to
create
10
000
pods,
because
there
are
100
of
these
not
so
hollow
known.
I
call
it
not
so
hollow
notes,
because
now
it's
not
really.
You
know
pretending
that
the
pod
is
not
running
at
all.
It
runs
a
real
cni
driver,
real
cnn
port.
You
need
resources
for
that
and
then
it
pretends
to
run
a
hundred
parts,
which
is
the
kubernetes
limit.
B
It's
kubernetes
limit
is
110
by
default,
but,
let's
you
know,
take
100,
it's
a
good
number
to
take
and
then
essentially
we
get
a
cost
savings
of
6x.
That's
the
idea
now
yeah,
it's
more
impressive!
Thank
you
about
your
work
with
scheduler.
Does
this
having
the
facility
to
do
network
network
emulation
or
pretend
faking
the
network
load?
Does
that
help
with
scheduling
at
all,
because
current
codemark
should
be
able
to
do
that?
For
you.
D
Yeah,
because
that's
exactly
what
I
was
thinking,
actually
we
don't
need
to
simulate
the
networking
capabilities.
We
just
need
to
be
able
to
simulate
different
node
types,
because
so
far
we've
been
using
fake
cubelet
manually
with
a
configuration
script,
but
I
was
thinking
after
your
presentation.
Maybe
cubemark
would
would
make
our
life
easier.
B
So here, instead of having this fake node, you want to have different node types. Is that what you're saying?
D
Yes, exactly. Currently we do that using a custom script, but I was thinking maybe Kubemark can do some of that work for us.
B
No, no, I didn't mean to cut you off; the only reason is that I want to understand more. If you can share some documentation on what you're trying to do, maybe by next meeting we can have a better picture and a more productive discussion on it.
B
Just deploying the admin cluster takes a while: I deployed one here and it takes five minutes, and the Kubemark cluster on top will probably take another 10 minutes. We don't have the time today, but next time I will show this demo. What I can show you here is a Kubemark deployment I did: this is the admin cluster, with a real master and one worker running, and on top of that is what I get when I deploy a Kubemark cluster.
B
My kubeconfig points to the Kubemark config and I do a get-nodes, so it's showing me this master, which is the real master, and a not-so-hollow node, which is actually a pod. Then if you go to the admin cluster, you get the nodes and you get the pods.
B
So
you'll
see
that
it's
running
one
of
these,
and
if
the
more
of
these
that
you
run,
if
you
run
a
hundred
of
these,
you
can
get
ten
thousand
parts.
This
is
honest.
This
is
running
on
a
single
real
node,
which
is
this
one
here.
B
This
is
the
real
node.
This
is
the
real
vm
of
size,
32
n2,
high
m32,
and
it's
currently
I'm
doing
development
trying
to
get
misa
working
fully.
There
are
some
bugs,
so
I'm
just
using
one,
but
when
you
do
scale
testing,
this
would
be
you'd
see
a
hundred
of
these
pods
hollow
nodes,
and
each
of
these
is
capable
of
running
a
hundred
parts.
It
gets
you
to
ten
thousand
number
100
fake
ones.
If
you
want
to
do
real
workloads,
it's
going
to
be
much
less,
probably
yeah.
B
So
definitely
so
that
is
the
intent
of
this
is
for
stressing
the
networking.
But,
yes,
you
can
use
real
pods
and
then,
of
course,
it
can
be
very
heavy
weight
if
you
use
that
lightweight
alpine,
the
5
mb
images
and
then
add
the
tools
that
you
really
need
like
hyper
for
something,
then
you
can
do
network
simulation
tests
or
any
other
traffic
test
that
you
want
to
do
you?
Can
you
can
use
this
for
real
workloads
of
your
type?
B
So
this
is
why
I
was
not.
I
was
when
the
only
guys
selected
my
miser,
which
is-
I
already
talked
about
it
in
texas,
but
they
didn't
select
this
one.
I
was
like.
I
guess
you
guys
only
want
to
hear
if
I,
if
I
submitted
a
cfp-
and
this
is
true
even
for
kubecon-
I
think
I
submitted
this
one
for
the
ebpf
colo,
which
is
where
I
think
it's
a
real
benefit,
but
there
was
another
one
that
we
submitted
for
resize
they
selected.
B
It is a really up-and-coming technology. So, what we see here, let me just complete this since we are already on this demo: we are running one real hollow node, and in that hollow node we are running Mizar; this deployment is with Mizar. It's running a daemon, like any CNI would do.
B
It
would
run
a
daemon
on
each
of
the
nodes
and
then
it
turns
in
in
this
case
we
are
running
an
operator
that
listens
to
the
parts
and
configures
ips
and
two
real
ports
in
this
case,
so
they
have
this
miser,
ascend
ip
10.,
10,
20,
0,
6
and
10
10
205
and
of
course
we
are
able
to
let
me
exit
this,
so
I'm
able
to
exec
into
that
I'm
going
to
use
the
kumar
kub,
config
and
exec
into
that
netpod
one
and
then
do
ipa,
so
that
is
10206..
B
This
is
the
part
that's
running
in.
If
you
look
at
it
here,
it
is
running
in
the
hollow
node
or
in
the
master
node.
And
then
there
is
another
part:
that's
running
in
the
hollow
node,
which
is
the
which
is
really
a
pod,
so
we
can
go
and
ping.
The
pink
should
still
be
in
the
buffer
yeah,
so
we
can
ping
it
so.
D
But I think it's very important work, because this really saves tons of money when testing CNIs.
D
Yeah, sorry for getting so much off topic, but...
B
No,
no,
this
is
it's
important.
I
think
we
should.
We
should
take
this,
and
that
was
the
reason
that
was
the
main
reason
I
said
we
should
take
this
offline
and
because
this
is
really
interesting.
Your
use
case
is
very
interesting
and
I
cannot
possibly
do
justice
giving
you
all
the
information
in
without
knowing
the
full
extent
of
what
what
you're
looking
for
so,
not
not
to
run
my
own
agenda,
but
I
think
your
agenda
is
more
important
here.
D
That's great, thank you. So Deepak, can you please send me Vinay's contact details?
B
They tried to use Multus for multi-homed, isolated multi-networking for their use case, where they want to separate things out: they want a pod to have isolation between its two different interfaces, the back-end and the front-end side of things. They tried to use Cilium with Multus, it just didn't work very well, and they had to do some special work to make it work.
B
The
idea
here
is
to
continue
see
if
we
can.
I've
submitted
an
annual
review
annual
report
to
the
toc
it's
coming
up
for
review
in
three
weeks
in
four
weeks
or
so
by.
I
think,
third
week
of
september,
they
meet
twice
a
month
and
if
they
have
any
questions
I'll
be
there
to
answer,
but
I'm
hope
hoping
that
they
don't
archive
it.
Yet
we
don't
have
the
resources
to
work
on
this.
B
So
I
don't
know
how
much
progress
we'll
make
on
this
and,
like
I
said,
I'm
the
only
person
working
on
all
these
three
things.
So,
but
the
idea
is
to
see
if
we
can
take
misar
the
technology
and
misar
do
some
upgrades
to
it.
Essentially,
the
send
side
make
it
ebpf
dc,
tc
programs
and
the
receive
said,
keep
it
xtp
and
see.
B
If
that
gives
us
good
performance
and
miser
already
has
isolation
built
in
and
it
works
pretty
well,
and
if
that
cfp
gets
selected
for
kubecon's
ebpf
day
I'll
present
it
over
there
with
wong,
I'm
already
going
there
for
another
thing,
so
I'm
hoping
that
they
select
that
for
the
ebpf
co-located
event.
B
So
this
is
pretty
much
the
status
update
for
cni
genie.
I
think
the
short
story
is
that
you
know
we
don't
want
the
toc
to
archive
it
just
yet
and
the
only
person
I
kind
of
know
a
little
bit
over.
There
is
dims
he's
one
of
the
toc
chairs.
A
Just one tip on that very last line, "sync with Sushanta to gain any tribal knowledge": this guy is the India Huawei guy, so make sure you do that in an open-source forum. Just be careful.
B
Yeah,
how
about
talking
to
kevin
he's
kevin
is
gonna
is
committed.
B
Okay, so no one-on-one conversation with him then, because...
B
He's
also
coming
there
he's
also,
I
think,
he's
maintainer
for
a
volcano
project,
so
he's
coming
for
a
maintainer
track.
He
should
be
coming
there
for
a
maintainer
track.
I
don't
know
if
he's
coming
in
person
or
not,
I
figured
I'm
going
to
run
him
in
the
speaker's
launch
or
something
so
yeah.
That's
fine!
I
think.
That's!
Okay,.
A
Right, yeah. I think this is good; this was a really good presentation, you covered a lot of things, so I appreciate it. It looks like there's a lot of synergy between Thomas's project, the one we are working on, and then the use case which the Click2Cloud folks mentioned as well, so we'll discuss more on that side. I appreciate that; thanks, Vinay.