From YouTube: CNCF Telecom User Group Meeting - 2021-03-01
A: Okay, it's five after, so I think we can get started. Welcome, everybody! This is the Telecom User Group meeting. We meet on the first Monday of every month at 1500 UTC; that was decided after a recent poll. We used to flip back and forth, but now we're just meeting at 1500 UTC. If you're joining today and you're new, or actually everyone: please feel free to add your name to the meeting notes, so we can keep a record of who's been here. We don't have a ton on the agenda today. We're going to have a presentation from the great folks behind ENO, and I'll talk a little bit about what's going on in the CNF Working Group to keep this group up to date. But before we jump into that, is there anything anyone else would like to add to the agenda?
B: Okay, one thing, more a question than an addition: do you want to discuss a bit how to continue with the network orchestration group, what was started or discussed in the last meeting? We had some discussions on Slack, but I'm not really sure how to continue with that.
C: Yeah, that's a good question. It's a little bit on me right now; there's one task I need to do. But anyway, let's add that to the agenda and discuss it. Good idea, thank you.
A: Okay, great. Is there anything anyone else would like to add?
A: Okay, it seems I forgot to copy up the events, so just as an FYI for people that are interested: the CNF Working Group meets weekly on Mondays. The ETSI Plugtests event is now done. The Kubernetes on Edge event, if you're interested in learning more about running Kubernetes on the edge, is going to be Monday, May 3rd.
A: The CFP is now closed, and I think there are a lot of really exciting talks, so register for that now and find out more. That's going to lead into KubeCon + CloudNativeCon Europe 2021 Virtual, which is the first week of May. There are links for all of that in the meeting notes.
D: Okay, thanks. So hello, everyone; I'm Alok. I've been working with Ericsson as part of this external network orchestration effort.
D: We started as a study within our team, working on Kubernetes network orchestration: how we can fill the gaps we currently have in the Kubernetes default networking model, and how we can make it more dynamic, so that it can be orchestrated on demand. So we started as a study; now we are in a phase where we've been running a proof of concept, and we have the intention to take it further.
D: This is a quick walkthrough of the proposal, which I think I shared with Bill a couple of weeks ago, and we decided to bring it up here in this forum with a group of people and collect feedback: are we targeting the right thing, or is it something that doesn't solve a purpose or won't add value to the community?
D: A bit of background to begin with. As we all know, the standard Kubernetes networking model relies on a single network interface, which is also what Tal mentioned in one of his slide decks. With that single interface, when interworking with external networks, it doesn't always provide proper network separation; and above all, its implementation through the Linux kernel IP stack doesn't fulfill the performance requirements that many of the telco network functions have been looking for.
D: So we cannot really automate this along with the lifecycle of the network function, that is, with what happens when we deploy a network service or a network function.
D: It's a provider network that we create, or that we orchestrate, based on NFVI requirements. It's a multi-segment network: it consists of VLAN and VXLAN segments, and then the L2 gateway connections are created using the OpenStack API to bridge the VLAN and the VXLAN on the hardware VTEP.
D: By creating the ports in the provider network, in OpenStack terms; in the case of bare metal it's fairly straightforward.
D: I think it might not be the ideal solution. It's more error-prone, and it's a tedious task to do, even if it's done through some bash scripts or other scripting, so we want to solve it and make it more automated.
D: There are lots and lots of efforts by other projects to handle, for example, stateful applications. So what we are trying to introduce here is the External Network Operator, to automate that network orchestration: the creation, and more broadly the lifecycle, of those external networks.
D: This is the overall architecture of ENO and the plugin API. Starting from top to bottom, we have the northbound API, through which we collect, or feed in, the CRDs, the custom resource definitions. Then there is the fabric-agnostic operator; it's the southbound side of ENO, and it has pluggable CNI support, which supports various fabrics.
D: Let's call them fabric A, fabric B, or Neutron in the case of OpenStack-based deployments. And to call out a specific fabric, there is the OVS bridge; it's a dummy fabric which we are implementing for our PoC. So we have various fabrics, and for each fabric we have a corresponding fabric plugin to orchestrate that fabric. And then ENO has this dotted line.
D: It splits ENO into two facades. The upper one handles, or manages, the custom resources, which we will discuss in detail in the coming slides; the lower one is for the external fabric orchestration, the creation of external networks and the management of those external networks. And then, as I said, we are working on a proof of concept, so there's this OVS bridge, which we are using as a dummy fabric for our realization, and an OVS plugin to orchestrate that fabric.
C: One question. The dotted line here that separates the internal versus the external: the idea, as I understand it, is that these plugins are not just for orchestrating external networks, but really to connect them to containers running in Kubernetes, right? So even though it's not CNI, for example with OVS, the idea is that you will get this L2 network to your pods. Am I correct?
C: Yeah, I think it was clear, but you have a lot of echo in your sound. It was kind of hard to understand, but I think I got my answer.
D: Okay, thanks. So, moving forward, we have actually modeled the northbound API as a data model, but this is like a meta slide; we kept it to visualize how it looks. Our intention is to keep this a general introduction session, and to leave the detailed data model discussions for further sessions, in the interest of time; we can come back if there are discussions needed around this topic.
D: Okay. So this is the example workflow we kept to visualize the end-to-end flow and what the orchestration looks like. Let's start from the orchestration layer, the MANO layer, where we onboard the CSAR packages for every network function using the NFVO. The NSD, the network service descriptor, then gets parsed and generates the ENO external resource definitions, which are nothing but custom resources, which will then be fed into the External Network Operator.
D: We then create the network attachment definitions, and once that has been done, the gateway orchestration, as I said in the beginning, can be done either through the NFVO, which creates the VLANs associated with the VRFs, or through some scripts, if that functionality is not there in the solution.
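For readers following along, a network attachment definition of the kind described here, realized through the open-source OVS CNI plugin, might look roughly like the sketch below. The config format follows the OVS CNI plugin's documented schema; the resource name, bridge name, and VLAN ID are hypothetical, not taken from the ENO PoC.

```yaml
# Illustrative sketch only, not actual ENO output. The "ovs" CNI
# config follows the k8snetworkplumbingwg/ovs-cni plugin format;
# the name, bridge, and VLAN values here are hypothetical.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-net-vlan13
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ovs",
    "bridge": "br-data",
    "vlan": 13
  }'
```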
D: So once all this has been done, from step one to step five, we basically have our tenant network in place, and then the operator can delegate the task to the VNFM to deploy the network functions. The VNFM deploys the CNFs, which will be using the tenant network that was configured in steps one to five. So this is the overall flow and the end-to-end orchestration, and I would like to highlight here that it's not just about ENO; it's also the supporting components we have, like the NFVO in the orchestration layer and the gateway orchestration in the fabric.
C: Can I ask a question about the previous slide?
C: So is this example workflow actually something you built? Is there a PoC of CSAR packages working with a specific NFVO, one that actually works?
D: So we have our MANO architects and an internal orchestration team; we did some technical feasibility work with them, and they agreed that it is possible. For the PoC we haven't built the end-to-end part, but that's one of the requirements we have raised with our NFVO team, and during productization we have that requirement, which will provide this feature and make it end-to-end orchestration. So on technical grounds it is possible, but we haven't tested it so far.
C: All right, well, I'll just self-promote here and say that if you're looking for something that can quickly get CSAR packages into Kubernetes, check out my Turandot orchestrator, and I can help you with that if you like. Anyway, we'll continue to talk after the presentation.
F: Yeah, so thank you, Alok. To actually put everything to the test, we decided to create an ENO PoC to implement everything that Alok has just shown. For this PoC we will have three main use cases: the access mode use case, the selective trunk use case, and the transparent trunk use case. For those use cases we are going to use the OVS CNI and the host device CNI to actually implement everything.
F: So for the first use case, we are going to create a service attachment, and because it's an access mode use case, we're going to assign a single L2 service, which actually corresponds to one VLAN, and we're going to test the creation of this service attachment, the update, and the deletion. For the second use case, the selective trunk use case, we're going to create, update, and delete an L2 service attachment, and we are going to include there a range of L2 services, which means a range of VLANs; for this one, too, we are going to use the OVS CNI. And for the transparent trunk use case, we have two branches: the host device CNI branch and the OVS CNI branch. In those branches we are going to create, update, and delete an L2 service attachment, which would be of type trunk, and we are going to include ranges of L2 services, which again means a range of VLANs.
F: We have four worker nodes, which are arranged into pools: the blue pool and the red pool. The criterion by which we separate nodes into different pools is the networking characteristics of the nodes. So in the red node pool we have OVS bridges, and in the blue pool we have a virtio pool, which includes a range of virtio interfaces underneath.
F: Those worker VMs are connected through virtual trunk interfaces to the dummy fabric, which for the PoC will be an OVS bridge fabric, but for a real deployment could be an actual data center fabric. And to actually expose the networking characteristics that our nodes have to the Kubernetes API, we need to create three connection points. As we can see from left to right, we have a connection point which corresponds to the virtio pool networking object, we have a connection point which corresponds to the bridge transparent trunk, and we also have a connection point that corresponds to the bridge data that we have in our system. Those CRs are going to be created through a Kubernetes lifecycle management system or by an administrator.
G: If I can interrupt quickly: since we've not mentioned what an L2 service attachment or a connection point is, maybe spend just a minute on what these concepts are; I think it will help the group.
F: I have L2 services and L2 service attachments in the next slide, where we have a full-blown example. But the connection points are custom resources that ENO understands; they actually represent the network characteristics of our nodes. Because we have three different networking objects, the bridge data, the bridge trunk, and the virtio pool, we need to create three different connection points, one for each of those objects, across our two pools.
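To make the concept concrete, a connection point CR that ties a worker pool to a host networking object might look something like the sketch below. This is a guess for illustration; the API group and field names are invented, not ENO's actual schema.

```yaml
# Hypothetical sketch of a connection point custom resource; ENO's
# real schema may differ. It maps a labeled worker pool to the host
# networking object that pods scheduled there can attach to.
apiVersion: eno.example.com/v1alpha1
kind: ConnectionPoint
metadata:
  name: cp-ovs-bridge-data
spec:
  nodeSelector:
    pool: red              # the "red" worker pool
  networkObject:
    type: ovs-bridge
    name: br-data          # OVS bridge present on those nodes
```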
F: So we register those with our Kubernetes system and we move forward. Next slide, please. On day two, we need to create ten L2 services and ten subnets. Each of those L2 services actually represents one VLAN; here we have ten L2 services, so we are going to have ten VLANs, from ten to twenty, and the subnet objects represent the IP address ranges that we want to associate with each of those VLANs.
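Again purely as an illustration (invented API group and fields, not ENO's actual CRDs), one of those ten VLAN-backed L2 services and its associated subnet might be written like this:

```yaml
# Hypothetical sketch: one L2 service per VLAN, plus a subnet CR
# carrying the IP range to associate with it. Field names are
# illustrative, not ENO's real schema.
apiVersion: eno.example.com/v1alpha1
kind: L2Service
metadata:
  name: l2svc-vlan-10
spec:
  vlanID: 10
---
apiVersion: eno.example.com/v1alpha1
kind: Subnet
metadata:
  name: subnet-vlan-10
spec:
  l2Service: l2svc-vlan-10
  cidr: 10.0.10.0/24       # illustrative range for this VLAN
```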
F: Those, too, are custom resources that ENO understands. So we register them with the system, and in the next slide we are going to create some L2 service attachments that will bind all this together. So here, to bind all this together, we need to create four L2 service attachments. With those L2 service attachments, ENO will kick in and will open the corresponding VLANs on the fabric, and will also create the corresponding network attachment definitions for the pods to consume.
F: In the first L2 service attachment, we can see that it's VLAN type access, it's related to the OVS bridge data, and it will consume only one L2 service, because it's an access-type L2 service attachment, and that's VLAN 13; the implementation that we are going to use here is the OVS CNI. The second L2 service attachment is type selective trunk.
F: We are also going to use the OVS bridge data connection point here, and because it's a selective trunk service attachment, we're going to use a range of L2 services, from VLAN 10 to 14, and again we're going to use the OVS CNI for this service attachment. For the third one, we have a VLAN type trunk; again we're going to use a range of L2 services, from VLAN 13 to 16.
F: Here we have a different connection point, which is the OVS bridge trunk, and the implementation will again be the OVS CNI. For the last one, it's a VLAN type trunk again; the connection point here is different, it's the virtio pool, which corresponds to the blue worker pool. We're going to use a range of L2 services, from VLAN 12 to 20, and the implementation here will be the host device CNI.
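Pulling the first of those four together, the access-mode attachment might be expressed roughly as below. As before, the schema is invented for illustration and only mirrors the fields described in the talk: VLAN type, connection point, consumed L2 services, and CNI implementation.

```yaml
# Hypothetical sketch of the access-mode L2 service attachment
# described above: a single L2 service (VLAN 13) on the OVS bridge
# data connection point, realized through the OVS CNI.
apiVersion: eno.example.com/v1alpha1
kind: L2ServiceAttachment
metadata:
  name: l2sa-access-vlan13
spec:
  vlanType: access
  connectionPoint: cp-ovs-bridge-data
  l2Services:
    - l2svc-vlan-13
  implementation: ovs-cni
```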
F: Next slide, please.
F: So now that we have everything in place, we need pods to actually consume all of this, and we are going to create four pods: three for the red node pool and one for the blue node pool. The first pod is going to have an access mode interface for VLAN 13, in the middle of the image, because we consume the network attachment definition that corresponds to the access OVS CNI case.
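The consumption side is standard Multus usage: the pod references the generated network attachment definition by name in its networks annotation. A minimal sketch, reusing the hypothetical NAD name from the earlier example:

```yaml
# Minimal sketch: a pod requesting a secondary interface via Multus.
# The annotation names the NetworkAttachmentDefinition to attach;
# "tenant-net-vlan13" is the hypothetical NAD sketched earlier.
apiVersion: v1
kind: Pod
metadata:
  name: cnf-pod-1
  annotations:
    k8s.v1.cni.cncf.io/networks: tenant-net-vlan13
spec:
  containers:
    - name: app
      image: registry.example.com/cnf-app:latest   # placeholder image
```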
C: A very quick one: these are three different clusters, am I right?
D: Good question. So we have the worker node pools, which are classified by certain characteristics; each node pool has a set of characteristics bound to it. In this case the red node pool has certain characteristics, versus the blue node pool, which is bound for the host device case using the virtio pool and has its own characteristics. So yes, they're running on the same cluster.
F: So right now it's mandatory to have the network resources injector deployed in your cluster. The network resources injector is smart enough to understand that you want to get an OVS interface or a virtio transparent trunk interface, and it will actually cause the pod to be spun up on the appropriate worker node, without you having to specify anything more than the network attachment definition. Thanks.
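The scheduling mechanism referenced here matches what the network resources injector project documents: a NAD annotated with a resource name causes a matching resource request to be injected into pods that reference it, so the scheduler only places those pods on nodes exposing that resource. The resource and device names below are hypothetical, and the config is simplified.

```yaml
# Sketch of the mechanism described above. With the network resources
# injector deployed, pods referencing this NAD get a request for the
# named resource injected, steering them to nodes that expose it.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: virtio-trunk
  annotations:
    k8s.v1.cni.cncf.io/resourceName: example.com/virtio-trunk-ifaces
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "eth1"
  }'   # simplified; a real setup would let the device plugin pick the interface
```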
C: Our approaches are similar, and we identified the same kinds of problems. I think the difference is that mine is just much smaller; I worked on it on my own. You guys definitely went farther, especially on that slide with the dotted line.
C: You guys went beyond the dotted line, and I stuck above it, at least for PoC purposes, in that initial kind of slide. But one technical aspect that I had an issue with was the custom resources and dealing with Multus, because one of the limitations of Multus CNI, and CNI generally (it depends on which CNI plugin exactly), is that you can't do day-two changes after the pod has already been set up.
C: That is, if you change the Multus annotations, you'll want to recreate the pod.
F: I mean, we don't solve that, actually; we assume that we have everything in place. We create the VLANs on the fabric through ENO, plus the network attachment definitions, and the pods will just consume those network attachment definitions. If we want to update those network attachment definitions, then we need to bring down the pods and create new ones. We don't have any hot plugging of interfaces.
D: Yeah, so it's more of a rolling update use case, right? You change or update your configuration for a particular VLAN, or you change certain VLAN IDs or extend your network; then you basically bring up your version 2 network service, which does make-before-break and then performs that transition from version one to version two.
C: If you're interested, the way I solved it (which is not necessarily a great solution; other people have suggested other ideas) was to monitor the relationship between the Kubernetes resources that carry that annotation, so Deployments, ReplicaSets, and Pods: to see the annotations they have, and from those, which custom resources they actually connect to.
C: So if the operator detects a change in the custom resource, it will know to restart those Deployments. You have to do a little bit of trickery, because Kubernetes doesn't have a restart API; it will restart if there's a certain change to the resource, so you can update its version, for example.
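For reference, the trickery described here is essentially what `kubectl rollout restart` does: it bumps an annotation on the pod template, and the Deployment controller rolls the pods because the template changed. A sketch of the patch an operator could apply to a watched Deployment:

```yaml
# Strategic-merge patch body for a Deployment: changing a pod-template
# annotation makes the controller recreate the pods, which then pick
# up the updated Multus annotation. This mirrors the annotation that
# `kubectl rollout restart` sets under the hood.
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-03-01T15:30:00Z"
```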
C: Right, and I think the reason it's so awkward is that it assumes users would only use a Deployment or a ReplicaSet, you know, the built-in controllers. But there are DaemonSets, there are StatefulSets; those are built-in controllers, part of Kubernetes, but if somebody extends it with something new, your operator wouldn't know about them. It's not the best solution; there's an issue here.
C: I think in Kubernetes, as we're all aware, handling this through annotations might not be the best way, but this is how Multus works. It's weird, right? Annotations seem like metadata, and this is not; here we're dealing with something really intrinsic in terms of connectivity.
C: Anyway, it's exactly this kind of topic that might lead us into the next item on the agenda, or what's available on the agenda today, to talk about networking orchestration. These are exactly the topics that I think we all want to talk about: how are we actually solving these things, at a low level and at a high level. And I just want to thank you for this work; I think it adds a lot to the discussion, and it's really wonderful. So thanks.
D: Thanks, Tal. And just to add: we actually evaluated KNAP in the beginning, looking at the different approaches available in the community, and there is one from Red Hat as well, the Cluster Network Operator, if I'm remembering correctly. But I think the bottom line is that they are exclusively meant for the internal Kubernetes ecosystem, handling the custom resources; they don't have, as such, this external or second facade, which does the data fabric orchestration.
D: So we kind of stretched it a bit and, like you said, extended that idea, to bring the end-to-end orchestration down to the switch level, or the fabric level, and then tried to automate that area. So yeah, we can say it's a bit of an extension to what we have seen in projects like KNAP and the Cluster Network Operator, which we actually evaluated before.
C: I don't want to take too much time here, but I guess another aspect, a challenge that I think comes out of this PoC, is this: you created custom resources for these specific technologies, right, L2 attachments, and the challenge in Kubernetes is that all of these can look like one-shots. For a very specific use case you would create a custom resource with its own operator; that works, of course, but the challenge is really: how do we unite all these different custom resources, which might be contributed by the community, into some kind of solution that could really be more generic? If you could install these plugins in a generic way; or, put differently, how do we move beyond specific bespoke solutions to a really general solution? I think that's one of the tasks I see for the networking orchestration task force: to think about this problem and provide solutions.
B: Yes, sorry, just one thing. I was thinking about this, and there is this way the CSI interface works, how CSI selects, let's say, the storage implementation to use, and I think maybe we could use something like that to select the correct backend plugin (see the sketch after this exchange). But I totally agree that we shouldn't have any technology specifics in the northbound API of this. And also another thing: what worries me is that we somehow need to separate the, let's say, network administration tasks from the network consumption tasks.
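The storage pattern being referenced works by name-based backend selection: a StorageClass declares its provisioner, and the matching CSI driver services claims that use the class. The StorageClass below is the real, existing Kubernetes mechanism; carrying the same pattern over to network CRs (a network class naming its fabric plugin) is the speculative part.

```yaml
# Existing Kubernetes pattern referenced above: the backend is chosen
# purely by the provisioner name, and the matching CSI driver handles
# any claim that uses this class. A network CR could plausibly select
# its fabric plugin the same way. Names here are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com   # backend driver selected by name
parameters:
  type: ssd
```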
B: Okay, that sounds good. And is there also interoperability of the net-attach-def, in the sense that if you run it on another Kubernetes cluster, which uses something like an OVS bridge, the same thing works?
G: Yeah, sure. So we have been pondering building something like an OpenConfig-based service, with either gRPC or NETCONF, that can talk to different devices for this gateway configuration. Maybe, Alok, you can go back to the fabric plugin slide. As far as ENO itself is concerned, though, we have not found any adequate sort of API which can work at a fabric level.
G: You know, something like Neutron, or a vendor fabric; we haven't seen an open-source, standardized fabric API as such. So that's where the gap is. Although the southbound from such a service could be OpenConfig, which is a very device-configuration-specific interface, there's no standardized fabric API, at least to our knowledge, which could basically help build that layer.
H: We're at the top of the hour; I want to make sure that we have enough time for anything else.
H: One of the actions going forward, related to several of the comments: there are multiple projects out there trying to solve this.
H: So one thing that could happen would be an effort to list all of those and then potentially map out the differences, so that people can see here's what this one offers and here's the other thing. Then potentially a separate action would be to analyze the projects to see what parts they are trying to solve, which we're kind of talking about here, but there are potentially parts that can be broken out, and then we bring those forward, like the APIs we're talking about right now. And I know some of these items have been discussed for quite a while over in, say, Cluster API.
C: Right, so I think that's exactly one of the goals of the task force, as I see it: to do that work of comparison.
H: Absolutely. And one of the big things, and we've already talked about this, Tal, is splitting what the needs are, what we're having and what's missing, versus potential implementations, so that we can work backwards. Some of these things already have implementations, of course, but there was something driving them. So we want to bring that up to the top: what's going to be desired, especially if we dive into the Kubernetes community and get more people involved.
C: So we are almost at the top of the hour, but for the agenda item about the update on the task force, I'll do it in one sentence: please continue to the CNF Workgroup meeting that is just after this, because the decision has basically been to move the task force under the CNF Work Group governance.
C: So the update will happen there, I guess in the next meeting.
A: Well, thanks, Tal. And the last thing, for people that are interested: there is currently a self-nomination period for leaders for the Cloud Native Network Function Working Group leadership, and there's a link to it in the meeting docs. So if you're interested to see who's running for leadership, or are interested in getting more involved, there are more details in the link in the docs and on the mailing list. With that, the CNF Working Group will be starting in about two minutes here.