From YouTube: CNF WG Meeting 2021-04-26
A: Hi, we will get started in about two minutes. If you're just coming in, you can add your name and any agenda items to the meeting notes.
B: Yeah, I added two of them, just as a follow-up on the discussion we had last week around the external network orchestration, and about the glossary terms to be added, which I think you created.
A: I'll just add the review of PRs that were merged during the week, and any open PRs, in addition to these.
B: And the intention is to bring everyone onto the same page when it comes to how we are using these terms and what we are actually trying to define for the external network orchestration, so as to ease such discussions, and so we know whether we are talking about the same thing.
D: Thank you for this. I think we need to spend some time reading it carefully. At least I do.
E: Yeah, I'll chime in: something like 'pod', I think we should avoid overloaded terms. I actually use 'pod' internally kind of the way that Alok has it here, but obviously 'pod' means something very specific in the Kubernetes world as well, which is important, right?
B: That is true. I mean, I spent some time... okay, like you said, we use it heavily when it comes to talking about optimized data centers, which involve compute, switches, storage, and the physical infrastructure they belong to. But you are totally right, it's an overloaded term, at least in the Kubernetes ecosystem, so I'll try to replace it with some other, non-overloaded term.
E: I mean, it's fine; we all need to help each other through this, right? Because, like I said, I've used 'pod' the same way. Same thing with the term 'availability zone': that's a common term in the networking world, it's also a product feature in AWS, and it's a metadata...
D: Another quick comment: the network attachment, you know, talking about primary and secondary, that's very Multus-specific, and maybe it's even going into too many details, talking specifically about the relationship to pods, yeah.
D: I think we should talk about it in a more abstract way, perhaps, but yeah, I'm not sure.
E: I'm inclined to agree with Tal. I would say, if we're going to bring in anything that's, I don't know, product or solution specific, then maybe we just put a prefix on it. If we're talking about a network attachment and we call it just that, it should be abstract, to Tal's point. If we're going to talk about something that's specific to Multus, you maybe just put 'multus-' in front, like 'multus secondary network attachment' or something. But really, we don't want the glossary itself to be tied to any one solution.
D: What we're really missing here is even a definition of what a network is. But part of the problem here is that the Kubernetes Network Plumbing SIG already took over the terms 'network attachment' and 'network attachment definition'. So that's already...
D: You know, and it's referenced here, so that's already something that's defined. And it's true, it is kind of defined in relation to Multus specifically. So those terms do exist, but in some of the discussions we've been having (I was using this, and Ian was using it too) we were thinking of a higher-level kind of abstraction, and I was using the word 'networking' rather than 'network', and it's not great.
F: I mean, I would argue... yeah, I understand what you're after. I would still argue that the term 'network attachment', as coined by the Kubernetes Network Plumbing Working Group, is more generic than Multus's. Multus is just a reference implementation of that concept. So I think this deserves a place; we can kind of qualify it here to mean exactly that. Yeah, and...
F: ...have a more generic abstraction of a 'secondary network attachment' that encompasses the Multus secondary network attachment, the NSM secondary network attachment, and any other type of secondary network attachment.
D: Yeah, that's a good idea. To get a little bit technical here for people who aren't totally versed in this: there is something called a NetworkAttachmentDefinition, which is a standardized CRD within the kind of standard Kubernetes namespace.
D: Sorry, 'namespace' is the wrong word; the naming convention. Multus specifically adds an annotation to connect a pod to that NetworkAttachmentDefinition. And I won't comment on how awkward those annotations are (I'm not a big fan of them), but you're right, the Multus way of using those is specific to Multus. But a NetworkAttachmentDefinition by itself could live by itself. The strange thing is, if it lives by itself, there's no definition...
D: ...being used. Yeah, it doesn't have a lot of specific meaning. And also, I'll point out, it's a very minor definition. The CRD is extremely simple: it simply encapsulates a CNI configuration in JSON, so there's not a lot there. It's very, very generic.
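As a minimal sketch of what is being described here, this is roughly what a NetworkAttachmentDefinition and the Multus pod annotation look like; the CRD body is just a CNI config in JSON. The names macvlan-net and sample-pod, and the macvlan/host-local config, are illustrative only:

```yaml
# NetworkAttachmentDefinition: a thin wrapper around a CNI JSON config.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net            # illustrative name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
---
# Multus-specific annotation connecting a pod to the attachment above.
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod             # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```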
E: We should be careful about being generic in some areas and specific in others, because if it's all just K8s-centric networking, that should be called 'the K8s primary network' or something. This is the awkward place we arrive at when we come to CNFs: if you talk to a network operator and you talk about primary networks, they're not thinking about it from a K8s perspective, right...
E: ...in most cases. So I'm hesitant to use terms like 'primary network' that have very specific connotations, just because we're trying to bridge two worlds here. I mean, if you talk to a Kubernetes person and you say 'primary network', they're probably going to have their own biases. So I think we should be explicit with our terminology when it's important, or, if we do use something that's vague, like 'network attachment', then it should be, you know, like what I was saying.
D: Right, I think that's a very good point. I'll add that, you know, we usually talk about planes in our networking world, so we would maybe call this the Kubernetes control plane. But then, at the same time, sometimes the data plane piggybacks over the control plane, you know, the primary network. So other terminologies that we use are things like planes, and we also have fabrics.
D: I think this is a very good start to help our thinking, but I think there's a lot more stuff we need to add to the glossary and think about. But thank you for this; this is a good opening shot.
C: Perhaps a better term might be 'default Kubernetes network'. Yeah, it makes it very explicit, yeah.
C: And 'default' also gives the implication that there may be other networks attached as well, as opposed to just a unified primary or secondary. 'Secondary' even gives the connotation that there are only two networks, when there may be more than two.
D: So I have a preference for calling it a plane, because 'network' is so overloaded, and it's more than just a network. You know, as for the technicality of it: yes, it's an IPv... sorry, it's a layer-3 IP network, and dual stack is now supported in the latest version of Kubernetes.
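For context, dual stack here means Pods and Services can get both IPv4 and IPv6 addresses. A minimal sketch of how a Service opts in (the service name and selector are illustrative; ipFamilyPolicy and ipFamilies are the stable dual-stack API fields):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc                    # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack   # request both families if the cluster supports it
  ipFamilies: [IPv4, IPv6]
  selector:
    app: demo                       # illustrative selector
  ports:
    - port: 80
```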
D: So we can talk about it that way, but it's often implemented using some sort of fabric, some sort of SDN controller. So I'm more inclined to call it a plane, and then that plane itself is implemented through various networking solutions. Right.
E: And so we have data planes, and then data planes, you know, can be subdivided. So the thing that's going to be tough is figuring out how 'networky' (I'm going to make that a word) we get, because I definitely know, from dealing with the devs of the world in the past (they call us the sneaky network people), they tend to get a little queasy when we start talking about overlays and this and that.
E: And sure, even any of the ones that have direct BGP attachment, right, there's ways to...
E: ...peer with the underlay. But that's the thing, though, right? That's why it's important to acknowledge planes, and, like you said, it's important to acknowledge overlays, etc., because that drastically changes things, even at the CNI layer, where we do have some of these, you know, core K8s constructs. Yes, that's a good one.
E: I've gone through this before, because we did the same thing where you bridge these two worlds, in the NSM space, where nobody agreed on what a network was, what an attachment was, what an interface was; even the term 'interface' was this super complicated thing for all of us to agree on. But I do like the idea of us collectively centering around the concepts of networking, networks, planes, and overlays, because it kind of helps clarify those implementation details that you were describing.
E: And the last part I'll say on this: it's important, right? Because if we go with something like Cilium, then the NAT assumptions that we typically come in with when we're dealing with Kubernetes, because we start talking about that primary network, for instance; some of those assumptions may be false in certain contexts. So we don't want our terms to lead us astray.
C: Yeah, and another really good example, and this was one of the early writings on the wall that it was much more complex, was Calico.
C: So in fact, when the plumbing group was being created, we all met in person in Austin, and at the time it was called the 'multi-interface' group. One of the things we had agreed upon was to get rid of that specific name, because with Calico, if you want to add something or change something, you weren't going to add a secondary network to it, or a secondary interface.
C: It may just be a configuration that's flipped in a control plane that causes the functionality you want within a single interface, a single network; but from a production perspective, or from an operational perspective, that still ends up with the separation that you want.
D: I'll point out another thing: one of the things I hope will be a deliverable from this group is suggestions, recommendations for the plumbing group. So, as I said, the current NetworkAttachmentDefinition accepted by the plumbing group is extremely, extremely generic and simple, and it's obvious why: there are just so many... there are a lot of problems in reaching an agreement and alignment that would please everybody. But, you know, we're a group that I think is versed in these things.
D: Also, of course, as you guys know, in CRDs you can version things. So maybe version one of the CRD that exists now (or maybe it's v1alpha1) is already set in stone, but we could potentially think about a version two of that NetworkAttachmentDefinition.
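A minimal sketch of what versioning that CRD could look like; the v2 entry below is purely hypothetical (no such version exists), and the schemas are reduced to the bare minimum for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    kind: NetworkAttachmentDefinition
    plural: network-attachment-definitions
    singular: network-attachment-definition
  versions:
    - name: v1                 # the existing, very generic version
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2                 # hypothetical richer version, as discussed here
      served: true
      storage: false           # only one version can be the storage version
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```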
D: That would eventually encapsulate a lot of the new thinking that we might introduce here. So, anyway, my hope is to eventually get to that point; that might take a while.
C: Yeah, and it's something where we should not try to deconflict the entire world here; we should just deconflict, and explicitly say what we mean ourselves, locally. Because even with something like 'data plane', we've had conversations where it's like: oh, this is a data plane. No, no, that's actually a control plane; the real data plane is here. And then it's like, you look at the hardware...
C: ...the plane on the hardware; the real data plane is here. It's turtles all the way down. And so we should draw a line somewhere and say: here's explicitly what we mean. We're okay that it doesn't cover 100% of the edge cases. It should be clear what we mean, and if it's not clear, then let's make sure we get that clarity, but without having to deconflict across the whole industry.
C: Exactly. Yeah, my data plane is someone else's control plane.
E: I think, long term, personally, it is, because when we start talking about overlays, you need to have context for what you're riding on top of. I mean, at some point you need to understand that if you're pulling, say, an SR-IOV VF into a pod or something, then that means you're starting to get down into the weeds, and, who knows, maybe the best practices eventually say...
E: ...SR-IOV is a bad idea. But when you start getting into those low-level things, and you start doing direct peering into the underlay, or even something like Calico, right, where you peer with the underlay versus building an overlay on top, you need to have that concept of an underlay and an overlay in place. And then it's exactly like the planes: it's not the overlay, it's an overlay, right? So I would say that it's important.
C: Yeah, I tend to think of it as: if it's something you have to build before you can establish connectivity, then it's, maybe not guaranteed, but it may be an underlay. So, for example, if I have two Kubernetes clusters and I want to hook two Istio-based overlays to each other, or two similar overlay systems...
D: Well, you know, another term that might need some definition is 'mesh', right? I think we keep inventing new terms because 'network' is already taken, so there's fabric, plane, mesh. And I was always curious why Network Service Mesh took that term. But yeah, I don't know if 'mesh' even has a common definition. Well, it...
C: It is a mesh of network services, and so that's why we chose that term. It does fit: we negotiate connections between each other, and we establish those connections. The other phrase would be to call it a DAG, or, not even a DAG, it's also a graph. But yeah, it is a hard problem, picking names that don't completely...
A: Tal, one of the, I think, very important things that you pointed out was: ideally, we can get recommendations accepted, or at least seriously considered, upstream into Kubernetes. And I think the important thing out of that would be to make sure that, whatever we use, we can communicate clearly how it relates to existing terms.
A: So, as with the conversations earlier around 'pod' and other stuff: if we feel like we need to use a term, and we identify and show where there's a conflict in the meaning, then we need to be very clear, wherever we use it, about what we mean. And if we can do that, then when we present use cases, they'll be a lot easier to consume, because what we're asking is for people to take their time to read through, understand what we're wanting and what we need, and then try to find solutions.
D: Yeah, and by the way, one of the things we can contribute upstream doesn't have to be something technical in terms of a new definition; it could be updating the documentation. Right now, the documentation for Kubernetes networking is problematic, I think, for some of us; some of the language there won't fit some of the concepts we have here. It's not generic enough.
D: So that could be something that we would do upstream, you know, help Kubernetes find better language. I mean, it's no mistake that it took so long for Kubernetes to finally get dual-stack IP support.
D: That's putting the cart before the horse, maybe. I think we have a lot of work to get...
C: ...there, yeah. I would even go far enough to suggest that early Kubernetes was not even concerned about things like IP or similar. It was primarily concerned with just connectivity: I have a name, it resolves to something, can I connect to it? And there were basically three properties, and if you met them, then it was happy: nodes can talk to nodes, nodes can talk to pods, and pods can talk to pods, and how that happened...
C: ...yeah. But possibly IP is the one assumption that it made in that path.
E: Can you go back to the discussion, Taylor? So, which...
E: ...and get this, and if you scroll up: so, I don't know if you guys remember, like, the very first call or second call, I said we needed to define 'CNF'; people came at me with pitchforks, and then, sure enough, none of us agreed what it was when the first PR was put in. So I've made another attempt and pulled this one from the TUG.
H: I thought... sorry I'm late, by the way; I had another meeting and I only just got out of it. But I saw Gergely made a perfectly reasonable point that Kubernetes varies from version to version; but, you know, applications still run on Kubernetes regardless of the version. So I think there's something we can do with 'Kube native' here.
E: Yeah, so, I mean, first, he's got the cloud native one; like Victor kind of talked about, maybe just rephrasing it a little bit. I mean, I'm fine with whatever, but I would like us to just have a starting point: when we say we're working on CNF stuff, what does a CNF mean?
E: Well, not only that, but we should assume that this definition is really just a placeholder, because we just got done discussing, for 30 minutes, the fact that basically everything we're using to build the definition of a CNF is poorly defined. So then you get into this weird chicken-and-egg scenario. But, I mean, as people come and start checking us out, it'll keep getting used.
D: Yeah, yeah. I'll add that, you know, some definitions can be very specific and they can be very generic, so we could potentially work out something that is kind of a big definition that allows for a lot of specificities.
H: So I think even with a fairly light-handed definition we'll catch somebody out. But yeah, I mean, other than that, we don't have to go too far in depth. It doesn't need to be 'it runs with a certain kind of networking' or 'it requires CPU pinning', that kind of thing. In fact, that isn't part of the CNF definition, I think we all accept, but somebody will say it is, so, you know, we're going to have to find some middle ground there.
E: I think I'll just put the PR in later today, and I'll probably incorporate some of Victor's suggestions. And then, you know, this goes back to what was said earlier: that's the one I pulled from the TUG white paper; the first one, that caused all the conflict, was the one I pulled from the CNF principles, etc. Like, you know, at some point, too, we could just... the thing is: theoretically, all of this is agile; theoretically, all of this is open, right?
A: Frederick, I think you put forward the term 'Kube native'.
C: So with cloud native, we don't want to make the assumption that it runs in any specific place in the cloud, but instead to try to drive down what we mean to a smaller thing. Like, I would argue things like: if you create a buildpack which runs in Lambda, or runs in Cloud Run, and you were to run in that...
C: So I think it's a balance; we don't want to turn the crank too far. But I still think the term is useful. I'm okay with dropping it in favor of another term as well, if that's what the group would like. The purpose...
C: So, as I was trying to... I was trying to be careful not to have to argue those particular types of things. It could be: we could say, here's the best way to run within Kubernetes and how to interact with it, and that separates out the question as to whether it's a good idea to expose these types of things. Like, what's...
C: What are our best practices towards this, and what is something that runs well in those environments? Separating the two out isolates the conversation specifically to things that are within Kubernetes, rather than trying to drag in a whole range of other things. We may eventually have to jump into some of those things, but we don't have to do them now.
D: I don't know if this is helpful or will complicate things even more, but, for me, I never liked the term 'cloud native network function'.
D: I don't refer to workloads as cloud native. 'Cloud native', to me, implies a set of practices. So you could potentially take a network function that was not necessarily designed with cloud native practices, but wrap it in some sort of orchestration system, connectivity layer, set of operators, that cloud-nativizes it, right? Makes it work much better within the Kubernetes environment, in a way that can make it seem cloud native. Whether the workload itself was designed that way is almost beside the point.
C: A little bit of historic context: what you're describing is exactly what we meant, as in, it's not enough to do a lift and shift; you really should redesign it to work in a cloud native environment. So if you have something that was lift-and-shifted and then containerized, that is not the intention of 'cloud native network functions'. It was literally: how do you design following 12-factor apps, creating good metadata that you can then consume and reason about?
C: So that way your scheduler can make decisions about your workload. Like: oh, you're a workload that supports IP; I'm not going to join you to something that only speaks Ethernet frames; instead, I'll make sure I connect you to something else that's IP. Or, if you use SR-IOV, I'll connect you to an SR-IOV thing, and make sure you get all of that in the scheduler.
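A minimal sketch of the kind of machine-readable metadata being described, assuming an SR-IOV device plugin is installed; the resource name intel.com/sriov_netdevice and the network name sriov-net are illustrative, not fixed conventions. The pod declares what it needs, and the scheduler only places it on nodes that can satisfy the request:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sriov-workload                       # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # secondary attachment backed by a VF
spec:
  containers:
    - name: nf
      image: example/network-function:latest # illustrative image
      resources:
        requests:
          intel.com/sriov_netdevice: "1"     # VF request the scheduler can reason about
        limits:
          intel.com/sriov_netdevice: "1"
```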
H: And we're going to go around in circles on this. I mean, I want to contradict Tal, and I'm going to bite my tongue and not, because I don't think this is going to be a productive way of using our time on this call: there are probably as many perspectives on what cloud native means, or could mean, as there are people on this call. So let's take this to the discussion thread, if you really want to have it. But I think, yeah...
C: Yeah, and to add to that: we spent well over a year trying to get people to come to agreement on what it means to be a cloud native network function, and there's still no agreement on that. So that's the other reason for driving towards 'Kube native': to avoid all of the discussion around that, because that's a trap that will lead us down a dark hole that we may never emerge from.
D: Yeah, that was pretty much my point as well. You know, it's kind of nice to idealize and think of these pure, excellent cloud native functions that are out there. But, for example, our whole conversation about network orchestration is not CNF-specific; I mean, you could work with PNFs as well.
D: Right. Anyway, I don't think I'm helping here; I'm just making it more disgraceful.
A: It would take all week, eight hours a day. Let's... everyone, if you have thoughts on Kube native, then please add them to the discussion thread, and if you feel like we need a dedicated thread just for Kube native, then create one. We do already have a dedicated thread for the CNF definition, so feel free to add in there. And then, Ian, you...
H: I wanted to ask: I mean, it seems to me that we've got DANM, Multus, NSM, and a bunch of theoretical things that could exist but don't. They're solutions to a problem. They could potentially be a best practice if you can make a strong argument that one of them does everything that could possibly be conceived of and could never be bettered, right?
H: That there is a perfection here and you've reached it. And I presume you're not arguing that. So is what you're saying, you know, 'better', not 'the best ever'? That's the question.
F: I think that's not our point, Ian. It is not at all substituting or replacing any of those that you mentioned; it is complementing them, because it's filling a gap for which there is nothing there today, and that is to orchestrate networks that Multus and the CNIs can then use in order to attach pods to them.
H: Right, so you're thinking in terms of more the connectivity that Multus doesn't address, as opposed to the presentation that Multus does address. But, I mean, all right, fine.
F: A very small gap, or a task, and that is to plumb pods to networks that already exist and are configured up to a certain level inside the cluster, on the worker node. That's what the CNI does, right? Right. You know, ENO is addressing all the rest that is there: to set up those networks in the fabric, and inside the cluster, and maybe on the DC gateway, in order to prepare the infrastructure for Multus to do its job, or for DANM to do its job.
H: Yeah, or anything; that's fine. What I was trying to... I may not have used the most elegant words to do it, but the point I was getting to is: you can take this two ways. Either you can say that ENO itself is the best practice, or should be a best practice, because it solves this problem either as well as anything does right now, or as well as it ever will be solved. The second part of this is, rather than take ENO the implementation...
B: I mean, with the external... with ENO, we basically bring in, like Jan said, the automation for the external networks, which will then eventually be consumed by the network managers like Multus or DANM and NSM. So, yeah, we're kind of bringing in a sense of automation for such networks, which will then later be consumed by the network functions, and which doesn't exist today in the ecosystem.
F: I'm not sure, Ian, that I understand what you're after at all; I must say I'm completely puzzled. What do you mean by 'best practice'? Well, we have a challenge today: if an operator deploys a Kubernetes cluster, they have to manually set up all the networking underneath and inside the cluster, in order to prepare for these secondary network attachment managers, like Multus and the CNIs that they control, to do their work. So we don't have a best practice today.
F: What we are trying to do is to create something that provides an API, a Kubernetes-style API (so it is meaningful to actually host it on the cluster itself, as a CRD), to provide an interface, a declarative way, for an orchestrator to create those networks automatically. All right, that's the idea; that's what we're after.
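To make the shape of this concrete, here is a purely hypothetical sketch of such a declarative API; the group, kind, and every field are invented for illustration and do not reflect the actual ENO CRDs:

```yaml
# Hypothetical external-network CRD; every name here is invented for illustration.
apiVersion: eno.example.org/v1alpha1
kind: ExternalNetwork
metadata:
  name: provider-vlan-100
spec:
  vlanId: 100                  # VLAN to provision in the fabric
  subnet: 10.10.0.0/24         # addressing for attached workloads
  dcGateway: gw-edge-01        # optional DC gateway to configure
status:
  ready: true                  # set by the controller once the fabric is programmed
```

An orchestrator would create such objects declaratively, and a network manager like Multus or DANM would later attach pods to the resulting networks.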
H: So the best practice here that we're looking for is that you have a set of APIs... well, an initial best practice is that you have a set of APIs that allow you to reconfigure the networks that you can attach to, and where you want to attach to. And a long-term best practice would be: you use precisely this API, because this API is standard, and if you use it, you'll work on any Kubernetes deployment you find. And that's where we could... you know, again, neither of those actually says 'ENO' in it.
H: But the thing I'm trying to... I'm not trying to say ENO is good or bad.
H: You've heard that I have my... I've thought about this, and I think there are other things we can do here. But that's not to say that, I mean, right, it's my choice of implementation; I'm just trying to work out what it tells us that we can use from a best-practice perspective. And I do absolutely accept that ENO lets you do something that you need to do, and, interestingly, also something that, practically speaking, you can't do today, right?
H: So, if we were to write, you know, user stories... and use cases are not altruistic, in my experience; you write them with a fairly pointed aim of saying 'there is a hole here that we need to fix'. So what you're saying here is: I would like to connect to the network that sits next to my cloud, and I currently can't do that. I'm going to... well, you don't have to say 'I can't do that'.
A: Yes, it sounds like there are some best practices that are at least part of the design, or you're trying to get some type of practical solution that could be used. And, at a minimum, you're saying: declarative APIs for configuring the network.
H: Yeah. I think there are also some things about, you know, its current implementation, the fact that it does layer-2 networking. That one, I think... I mean, you've heard my opinion on this before, but it isn't, to my mind, necessarily a best practice, because, you know, there are other ways of doing networking; they may or may not be more valuable. So that one might be more of an implementation choice. But again, it sounds like we've just said that's not really the focus of this.
B: So, yeah, so that's...
H: Fine. I appreciate why we do that, because, you know, it's a simple thing to do; it's logical, it's basic, and actually it plays to the history, so everybody understands it. Totally fine. And it may well have its uses, and that's completely good as well. But I think if we divorce the two, then you don't lose one argument because you're trying to win the other one. What you've got is: we need a decent networking API.
A: For that, so we can do that, yeah. All right, I want to quickly go through; we've got about a minute, so switching to what's been merged. We reworked the 'individuals' and 'interested' sections.
A: If anyone would like to add themselves, it's now just a long list of anyone interested, and then we've tried to attach names and company names to everyone, so we can see that. So this is backwards compatible with what we already have. But please, if you're not on here and you'd like to be added, then do a PR to add yourself, and it takes your GitHub username. We removed the tech leads from the governance items until we need them; we can add them back later.
H: What we were trying to do there is: they'd kind of grown up as a concept without really having a purpose, so we thought it was better to remove the wording until we found what we want people to do, and then we will fold it back in. So it's not like they've gone forever; we're not trying to change the way things work, we're just trying to make sure that need drives change, versus, you know, change for change's sake.
A: Yep, all right. And let's see: the acceptance process for delegation has been merged. So this is about the simpler items, and it will be based on the contributing guide and the pull request information. So all of this is now merged in there, and you can see that. And I think those are the top ones.