From YouTube: CNCF Telecom User Group Meeting - 2020-10-05
A
All right, we can get started. For those who haven't been here, this is the Telecom User Group, and it meets at alternating times on the first Monday of the month.
A
So we just finished ONES. We have an LFN technical virtual conference coming up on the 13th, and then KubeCon after that; the schedule's been announced. Does anyone have anything interesting they want to add from ONES? I see Gergely and Tom and a bunch of people on here that have come, but just point out some interesting things that happened at ONES last week.
B
Yeah, it's very difficult to remember any highlights yet; I still have not processed my notes. But what I remember is that there were these two sessions in parallel: one was the TUG BoF session, and the other was the panel with, I don't know what the title was, the cloud native worlds one. I ended up in the worlds one, and we had quite an interesting discussion in the chat of the panel about the different problems we have at the intersection of cloud native and telecommunications.
A
Yeah, that was a good panel. I'm hoping we can do more of those. It's hard; I guess even with the talks pre-recorded, making the recording and then having the Q&A live is good, but it becomes a discussion either way. Unfortunately, for the telco user group birds of a feather, the Zoom links didn't work; they were not clickable, I should say. But for that one we're going to have an updated version at KubeCon.
A
On the Monday we put together, I guess it'd be a workshop versus a tutorial, where we had six different CNCF projects, one after another, each giving a condensed or dense introduction and demo focused on how to use that project for telecom or edge, and had those back to back. So if folks haven't seen those, as soon as the recordings are available I'd recommend checking them out, because you get six CNCF projects all one after another.
A
We're hoping to continue following up with those projects on how to collaborate and get feedback into the different communities and projects. I know just from some of the feedback across the groups that the people who were presenting were hearing stuff about how they could interoperate that was new to them.
B
I have a question about the birds of a feather session. Were there any new participants, any new parties who are interested?
A
When that was noticed, the session had already started, but I know a few people started posting the link in the Slack channels. I think by then it was too late, and people had shifted to the other BoF. I noticed a few names I hadn't seen before, but there weren't too many, unfortunately.
C
Also, that was our first experiment with the expo platform being used there, and there were a number of parts of it that were kind of clunky, for lack of a better word, and didn't quite work as expected. So, you know, we're trying to experiment with a number of platforms to make these virtual events more engaging.
C
We're aware that with so many virtual events there's a bit of virtual event fatigue setting in, so we're doing what we can to try and make them as close to live events as possible, and this was an experiment with an expo platform. So yeah, we did have a number of things not go quite as we wanted, for sure.
A
I appreciate y'all trying. I think the public Slack channel for ONES worked pretty well, and there were a lot of ongoing conversations that went on for multiple days over topics from different talks. That was really good to see, and it would be nice to make sure those continue past that time period and don't get lost.
A
There's a question in the Zoom chat from Ervins: "Beyond high-level discussions on CNFs, is it the right topic to discuss which particular hardware acceleration technologies are employed by members, for example DPDK or AF_XDP, and how best to stitch them with containers?" All right, that's a good topic and question. We can just add that to the agenda.
D
Yeah, sorry, I just wanted to ask because, is it too obvious? That's why I ask, because we have been working on that quite a lot. But if it's not a natural topic for the agenda, you can disregard it; it was just a question.
D
We are living a little bit too much, I would say, in the high-level discussions, so this was just a question about getting our hands dirty with some kind of technical discussion. But if it's not the right topic for the agenda, please disregard it; I'm not insisting.
A
So I'll give a quick update on the CNF Conformance test suite. We have a pretty large group here, so for those who aren't familiar with it: CNCF has, I guess I can just say, three initiatives. One of them is this group, the Telecom User Group. There's the CNF Testbed, which is a whole toolchain and framework for working with networking solutions, and it deploys to Packet; if you're interested, go check out the CNF Testbed. And then the CNF Conformance test suite, which you can think of as similar to the Kubernetes conformance test suite, or maybe the e2e tests; it's actually more similar to the Sonobuoy side.
A
But as far as the configuration and how it's set up, its goal is to provide a way to test cloud native principles and properties for both CNFs, the application side, and the platform pieces, beyond what the core Kubernetes e2e and conformance tests are covering. So this could get into items like: how do you provide hardware acceleration, like the question just asked about, in a way that services can consume it?
A
We've recently updated it. It started with workloads, and then platforms were added; platform tests were run separately, in that you would designate that you're going to run the platform tests. Now the workload testing, or CNF testing, has been moved under a whole workload section, as far as the namespace and everything.
A
So
you
can
either
run
workloads
the
workload
test,
the
entire
piece
or
you
can
run
categories
within
the
individual
test,
and
you
can
also
be
able
to
run
the
whole
thing
if
you
want
to
test
a
platform
with
specific
applications
that
are
running
on
it
and
most
of
the
new
test
focus
has
been
on
adding
platform
tests.
A
So this could be stuff like what happens when a node reboots or dies, that sort of thing. There have also been a lot of updates on the usability side, around getting set up, based on feedback over the last month. Some of that is to make it easier to quickly get started; some of it is about feedback if you're a developer trying to run the tests and make changes, so that it gives you the feedback you want, or lets you change different levels of logging and get a lot more detail. And then we're trying to look at what CNTT's needs are for the requirements side of workloads and platforms.
A
It also gets into platform add-ons like CNIs and CSIs; that's where the goals are, so we're taking feedback there. And on that note, we recently got integrated with CNTT for the testing, and it's now running for workloads specifically; platform tests are not released yet in the integration we were using, but it's now running in the CNTT OPNFV Functest, and those are running.
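For context, here is a minimal sketch of how the suite is typically driven: a small YAML file describes the CNF under test, and the CLI runs either the whole suite or just one section. The key names, example chart, and commands below are illustrative of the project around this time, not a definitive schema; check the cnf-conformance repository for the current syntax.

```yaml
# cnf-conformance.yml -- illustrative CNF definition for the test suite
helm_chart: stable/coredns   # the CNF under test, packaged as a helm chart (example)
release_name: coredns        # helm release the suite should track (example)
#
# Typical invocations (names and flags are illustrative):
#   ./cnf-conformance setup                                   # prepare the suite
#   ./cnf-conformance cnf_setup cnf-config=./cnf-conformance.yml
#   ./cnf-conformance workload                                 # workload section only
#   ./cnf-conformance all                                      # workload plus platform tests
```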
A
Does anyone have any questions, comments, or anything about the CNF Conformance test suite?
A
All right. So we had a couple of items on the agenda, and one person wasn't able to join us this morning, so we'll bump their talk to an upcoming meeting. And I think Sadie was going to talk with us about 5G CNF deployment models. Are you on the call? I'm not seeing them; all right, so it looks like we may need to defer that one.
A
And this may turn out to be something that would be nice as a survey. There have been surveys in the past, and I know CNTT had a survey just several weeks back.
D
Yeah, I think this is just to say we are just looking for some, I don't know, advice or opinions. I think everybody knows DPDK, but there is also this Address Family XDP, AF_XDP, or like embedded eBPF; both technologies are bypassing the kernel in terms of dealing with the network stack.
D
So there might be, I don't know, some suggestions from the other members, just what they are preferring or what their views are, because it would be quite interesting to also understand what their opinions about the technology are.
F
Sure, can you hear me? Yeah, go ahead. Sorry. So I would answer briefly in two ways, for things we're doing at Red Hat. eBPF as a platform is really maturing quickly; there's a lot of interest in it from various aspects, and very recent versions of the Linux kernel have really improved it in many ways. That's definitely a very exciting path. I'll also mention SmartNICs.
F
It really depends on your use case and what you're trying to do, but with a SmartNIC, some sort of FPGA, you can offload a lot of your networking.
F
For example, OVS, Open Virtual Switch: you can really layer OVN on top of that and create a whole data plane based on SmartNICs. It's challenging to integrate this into Kubernetes networking seamlessly; it's definitely not trivial. But these are two areas that we're working on at Red Hat.
E
So there is a third one. This is Sachin Kapoor from Juniper. In addition to DPDK and SmartNICs, there is another one, which is SR-IOV; that's becoming quite predominant as well. And in the SmartNIC category there is another one, which is GPUs. So those are the ones which are predominantly being looked at by most of the telcos.
D
Yeah, I understand. We're mostly focusing on the public cloud. For example, if our deployment environment is AWS, then of course, with their enhanced networking adapter, they are supporting DPDK and XDP.
D
So I just wanted to gather some insights; maybe members have done something, and is it worthwhile, and to which one should we, how to say, shift: whether it's DPDK or AF_XDP, or whether they are overlapping each other and in the end it will be one technology.
E
So there is another one, a competitor of AWS; it's a smaller company, StackPath. They are geared towards fully cloud native deployments across the globe, geared towards 100% Kubernetes, and they're using fully SR-IOV underneath.
E
So if you deploy Kubernetes on virtual machines there, they give you 25 Gb throughput in your VMs, and that's more than adequate for most application deployments. So, like I said, these are the three very predominant ones.
F
I'll mention quickly that SR-IOV has a CNI plug-in that was actually contributed by members of my team, and as long as you have hardware that can support it, I think that's a very quick way to start getting into high-performance networking in the lab.
H
This is Anand from NEC. So we have tested a couple of CNFs, and many implementations are using SR-IOV with DPDK, like an SR-IOV VF and then DPDK or VPP inside the CNF, to achieve this. With XDP, what we have seen is that we could not find a vendor that supports XDP at the port level. There is an XDP driver that bypasses the netfilter level, which gives throughput similar to DPDK, but that's for communication from pod to pod, which does not go via the network card.
H
So we evaluated a couple of CNIs, and they said these accelerations are not available today. Most of the implementations today, what we are seeing is SR-IOV plus DPDK or VPP, and, like Red Hat mentioned, there is the SR-IOV CNI, which is available in OpenShift with Multus, and some of our customers are using this at scale.
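As a concrete illustration of the SR-IOV-plus-Multus pattern being described, here is a sketch of a NetworkAttachmentDefinition wrapping the SR-IOV CNI, roughly following the upstream sriov-cni examples; the resource name, subnet, and other names are placeholders.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-a
  annotations:
    # ties this attachment to VFs advertised by the SR-IOV device plugin;
    # the resource name is a placeholder for your VF pool
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_netdevice
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "sriov-net-a",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24"
      }
    }'
```

A pod then requests the attachment with the usual `k8s.v1.cni.cncf.io/networks: sriov-net-a` annotation, and a DPDK or VPP data plane inside the container drives the VF.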
F
I can't give a complete answer, other than saying that it's just very well integrated: OpenShift does come with Multus, which is fully supported. So if you want to add an extra SR-IOV interface, you can manage that. I can't say much about installation and infrastructure; I don't know enough.
F
I know that Mellanox SmartNICs are on the roadmap; some support is already there. It really depends what you want to support, right? There's a lot you can do with SmartNICs; they're very, very flexible. So I know that on the roadmap is to fully integrate OVN and OVS via SmartNICs; that is on the roadmap for OpenShift.
F
Oh, SR-IOV specifically, sorry. Yes; you mean Mellanox and SR-IOV? Yes, oh, okay. Sorry, I'm deep inside SmartNICs these days. I don't know for sure which ones; I think SR-IOV is a pretty standard protocol. I don't know if the differences are that important, or which hardware is certified exactly, but I imagine the major ones are; I can get you a list if you're interested.
A
So what providers allow you to do this? Okay, who else besides StackPath? And I guess I could throw out Packet; I mentioned them.
A
Well, I guess I'm thinking, I know that with some cloud providers you're not going to have control over what would essentially be the underlay.
A
So at Packet you could actually set up layer 2 between machines, unless I'm off and it's changed recently. With some of the cloud providers, you can do an overlay, but you're not going to be able to set up layer 2.
F
So OpenShift would not be included; I know OpenShift runs at Microsoft, but I don't know if they actually provide the hardware for any of that.
A
What's happening, maybe, on that note, tying in with CNTT usage: which lab, or maybe I should say which LFN lab, is being used for testing the new Kubernetes CNTT reference implementation?
A
Sorry, what was the question? Apologies. Just wondering, I guess this ties in a little bit with it: it's not fully open, but what lab is being used for the CNTT RI? I know that Jim, from LFN, you may have input if you're still on.
I
Yes, it's mainly been, yeah, it's mainly been driven by the Kuberef project in OPNFV at the moment, and it's been deployed into, I think, an Intel pod within the European labs.
K
Yeah, well, hi there. Basically, it's true: this is a bare metal lab in the OPNFV context, and there are basically three different labs, or pods, we have available right now: two hosted by Ericsson, and an Intel lab as well. So from that perspective, yeah, it's a bare metal thing, so you have full control over everything, because you deploy everything. It's not a public cloud environment; it's really like a bare metal lab.
A
Okay, that's interesting, good to know. So the whole topic was mainly about what people are using; we're talking about hardware acceleration, and that kind of goes into what's available technology-wise and then where you can use those things. Of course, in your own internal production environment you can do whatever you want; it's nice to know what's out there for collaborating.
A
I would even call the CNTT lab maybe almost a hybrid; it's not fully public, but if you're interested in joining CNTT and collaborating, then you'd end up with access while you're working with people. So that works for folks who want to know how something looks.
A
Maybe someone wanting to see how SmartNICs or something else would work with the reference implementation could put that before the group and maybe collaborate. All right, does anyone have anything else, or want to talk about anything on this topic before we move on to the next one?
A
Or an open lab; if you want it all on your own hardware, then that goes down a different path. All right, let's hear about Knapp. Is that how you say it?
F
Or not; I think I pronounce it "nap", or "knapp"; sometimes "nap" is actually a word. So, do I share my screen? Yeah, that would be a great idea. Thanks, all.
F
Here we go. So I'll take about 10 minutes for this, and I'm not going to do a demo. Actually, this is kind of ad hoc; I thought Alex would be presenting today, but when I saw him removed I thought I would quickly put myself on the agenda. So let me explain the problem, and I think you'll understand it pretty well.
F
Here, oh, examples. All right, the problem is this: if you've ever used Multus, you know that you need a lot of knowledge to configure the networks. Let me expand this a bit; I'm giving an example here of a very trivial, straightforward use of Multus. The idea here: I have two deployments. The first one is attached to network A; the second one is attached to two networks. As you may know already, there is a special annotation, a CNCF annotation, that activates Multus; Multus will know how to look for this and attach an extra interface for this network. And then, similarly, we have another deployment here that has two networks, explicitly A and explicitly B. Now, these names here are names of custom resources called network attachment definitions, and these are very simple CRDs; all they do is include the configuration for CNI.
F
So you give it the CNI plug-in; here I'm just giving an example with a simple bridge. Pretty straightforward, right? But actually very, very difficult to manage at scale, I think, as anybody who's tried to use Multus knows, because of what it takes to write this configuration.
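To make the example concrete, the manifests being described look roughly like this: a NetworkAttachmentDefinition embedding a plain bridge CNI config, and a pod opting in via the CNCF annotation. The names, bridge, and subnet are illustrative, and network-b would be defined analogously.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: network-a
spec:
  # the NAD is just a thin wrapper around a raw CNI configuration
  config: '{
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br-a",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload
  annotations:
    # the special CNCF annotation that activates Multus; one extra
    # interface is attached per listed network
    k8s.v1.cni.cncf.io/networks: network-a, network-b
spec:
  containers:
  - name: app
    image: registry.example/app:latest
```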
F
You have to be a system administrator, or at least have access to system administrator information, not just for the cluster but even for the particular host on which this deployment and its pods will eventually be running, because you need to know which technologies are available. If you're using, for example, SR-IOV, you need to know what SR-IOV hardware is available. If you are configuring IPAM, you have to know who else is configuring IPAM; it could be somebody not even in your namespace, some other workload.
B
Just one comment: I think that's a fundamental design error in Multus, that the network administration and the tenant attachment of networks are meshed together like this.
F
It solves this very specific problem of attaching CNIs perfectly, I think, or not perfectly: it also has gaps. But yes, absolutely, this is what I'm trying to show today: that this is a big problem, a major problem in fact. You know, if you're coming from the world of OpenStack, you're used to having network as a service: you have Neutron, and Neutron, of course, has many limitations. I don't think we want something identical to Neutron for Kubernetes, because Neutron assumes overlay networks, right?
F
It already assumes that you can create any subnet you want, and it makes sure that you get it, and that's not always the use case we want in Kubernetes, and definitely not in telco. But we do need some way to manage these, and that's the project that I'm showing you today. So I called it Knapp, and I'll say this is a PoC. What I'm trying to do here is open up this discussion; I don't want to be the only one providing a solution here.
F
Here's the example, the same example we just saw, but using Knapp resources instead. So the kind is Network; this is a new kind of custom resource, but you'll see there's no CNI configuration here. Instead I'm specifying a provider, and I say that I want a bridge provider. And then, similarly for the deployments, I just use a different kind of annotation, and again I give it a name, but in this case I'm not referring to a network attachment definition; I'm actually referring to these Network resources.
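A hypothetical sketch of what those resources might look like; the apiVersion, kind, and annotation key below are illustrative guesses at the shape of the PoC being described, not its actual schema.

```yaml
# hypothetical Knapp resources -- names are illustrative, not the real schema
apiVersion: knapp.example.io/v1alpha1
kind: Network
metadata:
  name: network-a
spec:
  provider: bridge   # no CNI config here; the provider owns that knowledge
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload
  annotations:
    # refers to the Network resource above by name, not to a
    # NetworkAttachmentDefinition (annotation key is a guess)
    knapp.example.io/networks: network-a
spec:
  containers:
  - name: app
    image: registry.example/app:latest
```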
F
So, a pretty simple solution, right? Obviously all the knowledge has to be in the provider. The provider has to know what to do and how to provide these networks, so it's provisioning them and also de-provisioning them, right? For example, if there's a pool of subnets, the provider will know how to hand one out and to un-provision it, you know, return it to the pool if it's no longer in use.
F
If it's SR-IOV, you have a very specific, limited number of resources, of course, so you'll have some sort of provider running and knowing what resources are available, maybe by introspecting the node, something like that. So this is just a quick explanation of the basic idea, and I think it's very powerful, actually, because the idea here is that now I can design workloads that use Multus, but I don't have to know anything about the system administration stuff; that's offloaded to this provider.
F
The secret, of course, is getting these providers right, so I'll go over some of that. That kind of explains the rationale of what's happening here; the provider is kind of interesting, and I want to explain how the providers here work. They work through a system that I call extra-thick plugins. Some of you who know a little bit more about Multus and about CNI plug-ins know that in Multus, and in CNI generally, we talk about two kinds of plug-ins.
F
On the one hand, there are thin plugins, which are just one-shot executables that run; that's kind of how CNI works. It's a command-line interface: you give it standard in, it gives you standard out, very straightforward, and you can have a CNI plug-in that's designed just like that. It runs, it does what it needs to do, and when it finishes, it quits.
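For reference, the thin-plugin contract is just the CNI spec's one-shot invocation: the runtime sets a few well-known environment variables and pipes a JSON network config to stdin, and the plugin prints a JSON result to stdout and exits. The values below are illustrative (the JSON is rendered as equivalent YAML for readability).

```yaml
# environment supplied by the runtime, per the CNI spec:
#   CNI_COMMAND=ADD            # or DEL / CHECK / VERSION
#   CNI_CONTAINERID=<id>
#   CNI_NETNS=/var/run/netns/<ns>
#   CNI_IFNAME=net1
#   CNI_PATH=/opt/cni/bin
# network config delivered on stdin (illustrative values):
cniVersion: "0.3.1"
name: network-a
type: bridge
bridge: br-a
ipam:
  type: host-local
  subnet: 192.168.1.0/24
```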
F
We also talk in CNI about thick plug-ins. So yes, you run a one-shot CLI interface, but behind it you have some sort of service. Maybe it's a systemd service; maybe it's a Docker image running somewhere; maybe it's something even external to the cluster that's running and provisioning networking for you, if it's some sort of SDN solution, for example. Those we call thick plug-ins, because they're not just one-shot; there's something running all the time.
F
The one we designed the demo example with is actually running as a pod. Now, why is this interesting? You could also call them, I think, cloud native plugins, if you like, because they're actually native to the cluster itself. The advantage of doing something like that is that you can have a network function actually work as a provider.
F
So a pod could be both a consumer of Multus here and also a provider for Multus; that was kind of the architectural decision here, in terms of these thick or cloud native plug-ins. I'll mention quickly a big disadvantage of this solution; some of you might have already identified it. Multus can only work during the initialization of pods, so if the pod is already running, you cannot dynamically change the interfaces. It's not a feature that Multus supports right now, and generally Kubernetes doesn't.
F
Of course, you like to think that your pods are very lightweight, so they'll just be restarted with the new interface information if something changes. So right now, the way Knapp works, you always see all the pods coming up, and then they're restarted after the CRD is created.
F
That could be okay, or that could be a fatal flaw. So I've really considered another way of solving this: not via the operator pattern, but instead actually being an extra layer in front of Multus. If Multus is an extra layer, a kind of multiplexer for CNI, you can have another solution before it that would make sure to provide those CNI configurations for Multus; or it could be integrated into Multus.
F
As the question here was before: you know, if this is considered something that Multus should be doing, Multus could be enhanced in that way, but that would really be growing that particular project. So I just wanted, very quickly, and with this I'll finish my little presentation, to talk about this specific question: is this kind of a Neutron for Kubernetes?
F
I would say that, a little bit, it is. The idea here is to bring the same kind of ease of use that we have in OpenStack to Kubernetes as well, and by ease of use I mean that developers are able to create workloads without having that administrative information.
F
Otherwise, you would have to create some other system to do that, and I know people do that, sometimes with Helm, sometimes with other kinds of rendering before the deployment; but you end up having to create your own deployment system to make this work, and I think this removes that requirement. At the same time, I want to say that the idea here is to do something that's really cloud native, and not to duplicate Neutron, not to create exactly networks as a service.
F
It's more about network attachment definitions as a service. But this is the point where I really want to stop and open this up. You know, this is me giving one shot at an entry into this problem, but I really imagine that a lot of you have other ideas, or want to discuss this, so I'll stop here and open it up, if there's interest.
B
That's the thing: to separate the network administration and tenant network creation. I think that's something we do not have in Kubernetes, and we might need some kind of an API to manage networks. I had some similar ideas, but my approach, or my idea, was to create CRD definitions for the API and to use different controllers to implement support for different backends; the same API could use Multus underneath, or Network Service Mesh, or whatever networking solution the infrastructure has.
F
Well, let's join forces; I would love that. As I said, I'm not married to my idea. This was an attempt, and, as I said, it does have certain disadvantages. Also, you know, the real meat here is the providers and how they work. The demo I'm using here, you know, the bridge provider, is relatively trivial; the way my bridge provider works, it just saves a pool of subnets to a file.
F
So we can manage it and make sure to synchronize on that file, to make sure that everybody gets their own unique subnet, and then they can return the subnet back. But obviously, for more complicated networking solutions, there's much more work to be done during provisioning; for example, you might need to configure a PNF somewhere using NETCONF. So the magic really is in those providers, I think, and if there were a standard API for that, that of course would make things easier for everybody. But yeah.
B
Yes, okay, let's do that. I'm open to collaboration. I'm trying to figure out who I can get from Nokia to actively participate in this, and when I'm done I will send out some emails, let's say, because I have a list of interested participants, also from the last OpenDev conference, where we had a very similar discussion. We had the conclusion that we should have some kind of an agreement on a networking API for Kubernetes, and we also kind of agreed that this should be done as something out of tree, because the current Kubernetes SIG Networking is not really interested in these more advanced networking problems.
F
Yeah, exactly, and even Multus was very painful to get to where it is right now; there was a lot of resistance to that. It kind of looks like a temporary solution, right?
G
I was just going to say, this is Ryan Tidwell from SUSE. I was just going to mention here that I'm interested in and open to collaborating on this. I've got a lot of background with Neutron, been a contributor there for quite some time, and I'm kind of moving into the Kubernetes space. I'll just say that the problem you mentioned here is one that, on the surface, is pretty obvious to me as well.
G
Maybe we have an API there that's very focused on infrastructure and those sorts of objects, where with Kubernetes maybe we swing the pendulum a little too far the other way and wave our hands around things, and what you're describing here seems to be aiming for that middle ground. I'm interested in and open to collaboration on this as well.
F
Wonderful. Yeah, I'll say, you know, my focus is on orchestration. I care a lot about the underlying technologies, of course, but I think we all agree that Kubernetes is not as mature as the legacy clouds in terms of managing these resources at scale.
F
I think the scheduling paradigm that Kubernetes has introduced is very scalable in itself, but I think it caught us all off guard in terms of adapting our systems to it. I think this is part of the importance of this TUG, right, that we can really discuss these challenges and see how we get there. So, wonderful; yeah, I'd love to continue this conversation.
K
One more guy jumping on board here. Hi, Tal. I'd also like to, well, I won't restate the importance of this; I think that was mentioned already a couple of times. But yeah, Georg from Ericsson here; I'd like to be part of that as well. I should be able to find your contact somewhere, I guess, so that we can continue, all of us, of course, in some other context to discuss this.
B
Let's just start the list in the minutes with the interested parties.
A
Awesome, thanks. There's a question, real quick, maybe to end this, from Victor Morales. I think that's towards you, Tal: about upgrading to the new version of Multus (I think it's covering the Multus annotation for the default network), and then there's another question about the scheduler in the Zoom chat. Do you see those?
F
Let's see... oh, there it is, yeah. I am considering the new Multus, of course. I'll say, you know, this is a PoC; it really works, but, as I said, I'm even reconsidering the whole use of operators here. I think the disadvantage of having to restart pods could be a major one. Yeah, I could use some help brainstorming how it could work, so I'm very interested in getting feedback and trying out different solutions, you know, trying different PoCs.
A
All right. I'd like to hear about the differences between this and the approach that Network Service Mesh is taking; one of the things they're doing is dealing with being able to make modifications after the pod is running. But we're out of time, so maybe that can be a follow-up discussion.
F
Yeah, it's an excellent question, and personally I don't think that Multus is opposed to NSM. I think Multus could be an implementation for NSM; I can see all of this working together, possibly.
A
It can; there have been proofs of concept using Multus directly, right? They're definitely complementary. On the Multus side, I'm specifically talking about the k9.
F
Right, right, yeah, I'm thinking about it too. But I'm sorry, I have a really sharp stop; I have to leave right now. Oh, I understand, you know.
A
Thanks, everyone. The next call is at 11:00 UTC, that's 3am Pacific time, for the next call in November.