From YouTube: CNF WG Meeting 2021-06-28
A: All right, I think we can go ahead and kick things off. Can everybody hear me?

B: Yep.
A: Okay, so good morning. Apologies for being away a little bit; I was fulfilling my army reserve duty and have been playing a little bit of catch-up, so it doesn't look like there's a ton that's been added to the agenda.
A: I did one thing, but just to start, just so everybody's tracking: this Sunday is July 4th, so we're canceling next Monday's call, just because the majority of participants in the US are going to be on a long holiday weekend in observance of Independence Day. And same thing for the TUG; it's going to be pushed out to August 2nd. Everybody's tracking that KubeCon and ONES are coming up. So if anybody has anything they want to add to the agenda; otherwise, we can dive into the glossary and the discussion.
A: They were looking for some help. This is one of the ones that Adam specifically mentioned the glossary team could use some help on: providing a succinct, high-level definition of what a CNI is, from the context of network people.
A: And I'm of the mindset that we just regurgitate that definition. I don't know, it's just supposed to be readable, right? We got the overview on what the glossary is aiming to achieve, so we could do something else; it's just one of the things that they asked us about. Because it's, in my opinion, a pretty well-defined thing, it shouldn't be that hard for the CNF working group to put in a PR and contribute to the CNCF glossary.
C: Yes, Tal. What they're trying to do is make all of the CNCF glossary start with definitions and terminology that's accessible for people that aren't even familiar with cloud native networking or anything.
C: So it would be help with that: bring it up as high as, I think they said, an "explain this to a five-year-old" or a third-grader type of thing, so explain it like that and then go up. And then the other part is just making sure whatever goes in there is aligned with Kubernetes and communicates something about networking.
A: Additionally, Tal, I think the part that we can help with is this right here. I mean, I don't know how many people messed around with K8s before the CNI was drafted, and the lack of uniformity, but this little bit of color commentary, the problem it addresses and how it helps, I think, is something where we could throw our opinions in as network people.
D: Well, I guess if they do want feedback, it's for that second sentence: "it takes care of the basic setup." Well, it takes care of the setup; I'm not sure "basic" is the right word here, it could be very, very complex. And the other part, "host network interfaces". Well, it could.
D: Well, of course, right, but even in terms of the CNI plug-ins that we work with a lot, I'm not sure this definition is comprehensive enough. It's a bit odd to me, the "host network interfaces". A lot of these plugins work with SDN configuration, and host network interfaces is not the core issue; their role is not what is described here, I feel.
A: Yeah, but I think we could eventually add links to other stuff we provide, but at the end of the day it provides connectivity from the pods to the rest of the network, right? I mean, we should be more eloquent than that, but I don't think we necessarily need to be for this definition.
D: So I'm just saying that the second sentence just seems wrong. Even if we're not getting into it, I would remove the word "basic" and, as was said, it's about setting up network interfaces in pods. This part about host network interfaces, I think, should be removed.
F: If I may add one more perspective: I also agree that we shouldn't make this complex, and that seems simple enough. But in the service mesh world we use the CNI, the fact that you can chain-load CNIs and they are executed in a privileged mode, as a way to actually do, you know, iptables and whatever, which is not necessarily setting up interfaces; it's just doing networking stuff. The crucial point here is that this is run in a privileged mode.
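The chain-loading described here is expressed through a CNI network configuration list ("conflist"), where plugins run in order against the same pod. A hypothetical sketch, with plugin names and values purely illustrative: a bridge plugin sets up the pod's interface, and a mesh-style plugin runs after it in the chain (in privileged mode) to program iptables rules.

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "ipam": { "type": "host-local", "subnet": "10.10.0.0/16" }
    },
    {
      "type": "example-mesh-cni"
    }
  ]
}
```

Each entry in `plugins` is invoked in sequence, with the previous plugin's result passed along, which is what lets a later plugin do "networking stuff" without creating any interface itself.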
G: Hi, this is Sheetal Joshi from AWS, I'm a developer advocate for the EKS team here. I joined in a little bit late, but I have a basic question. I'm actually working in the telecom space currently, and with the CNI, I think more questions come up related to the delegate CNI versus the Multus interfaces: what that CNI should look like, and what does the CNI mean when multiple interfaces are involved?
A: If we don't feel like there's an adequate definition, or if we need, you know, more explanation, it's something that we could put in the CNF working group glossary. But for this, it's like, I think, Taylor said: it was the "explain it to me like I'm five" thing. This is for people like executives at your company who know nothing whatsoever.
H: Probably still not good, but there's also a semantic thing here. CNI is not a Kubernetes specification; CNI is a standalone definition, which lives under either, I think, the Linux Foundation or the CNCF, I'm not sure which. So it's also designed for configuring network interfaces in Linux containers, and I think we should word it like that, and then we put in "such as Kubernetes pods", just to make it exact, because we don't want...
D: Very good point. I'm going to link the specification here in the chat for whoever is interested.
H: Yeah, and very often there is additional information. So when you run CNI, Kubernetes will often include additional environment variables, which may be Kubernetes-specific. So just from a practical perspective, it is common to sometimes see CNIs that will only work in a Kubernetes context, depending on how that's set up. But generally, the purpose of CNI is to...
D: How about this: "it has been adopted by Kubernetes for creating network interfaces on..."
J: Hello, hello. Yeah, I'm Kishan Abdul from Orange France, and I'm a research and development engineer at Orange Labs, and just for that question...
J: So I don't know if you aim only to define the CNI for developers and users, or if you aim also to define best practices for using CNI, and more particularly using Multus to attach multiple interfaces to the pods. Because I know when using Multus we are able to attach interfaces of different types, and each type has its dependencies...
J: It may have dependencies on the worker nodes of the Kubernetes cluster, like, for example, the presence of a given interface on the node. So it may make network functions less adaptable to Kubernetes clusters in general. So I don't know if you aim to define best practices, like, for example, using additional interfaces...
J
Only
when
it
is
required,
for
example,
because
I
know
it-
it
is
a
little
bit
more
complex
than
using
the
default
network
provided
by
by
kubernetes
and
the
the
primary
cna
plugin.
So.
H: Yeah, I think part of it as well is not just identifying best practices; there are some very real gaps there as well. So I'll use SR-IOV as an example. When you request SR-IOV, there are multiple ways of getting an SR-IOV interface into your system, either through a direct interface or through a kernel interface.
H: But if all you want is, like, "hey, I need something that connects to a certain SR-IOV network", then you're in good shape. But if you need a specific one, such as "I need one that goes to a specific network, with a specific QoS class, that's been configured by a specific type of rack switch, and you need it to align against a certain NUMA zone", then these types of questions become much more difficult, because there's not a strong topology management story in Kubernetes just yet.
H: So we can ideally try to work out ways to fix it, but we also have to be careful, because telecom is not the primary purpose of Kubernetes, and we need to be careful not to overcomplicate Kubernetes for the sake of service provider workloads, especially when you consider that the total size of telecom is minuscule compared to the overall... well...
A: ...up, because this is not relevant to the contribution to the CNCF glossary. So, I didn't catch whose name it was from Orange Labs, but those are the exact types of things that we're looking for, for us to add into the use cases, which is here.
A
You
can
see
that
there's
a
couple
that
have
been
committed
and
then
ultimately,
these
use
cases
will
feed
into
the
best
practices
right
so
like
you
could
start
by
providing
some
use
cases
following
the
template
in
here
on
the
types
of
interfaces
that
you
are
using
to
connect
to
your
network,
and
you
know,
interface
with
pods
and
then
from
there
we
could
start.
You
know,
building
the
best
practice
out
there,
like
here's,
a
really
bad
way
of
doing
srv
in
a
pod,
here's
a
good
way
etc.
A: But for now, like you said, I don't want to confuse the work we're doing versus the work where we're just trying to assist the CNCF thing. And actually, I'll jump over to that real quick, just so people can see.
A: I do not know why my right click does not want to play nice today. So for those who weren't here for Adam's presentation: it wasn't just our group that complained to the CNCF about the lack of definitions out there in the ecosystem.
A: But you know the reality is, most people outside of this group, and, you know, people who really care about API lifecycle management, are probably happy with something like this. And then there are the parts where I would like us to really quickly at least put some footnotes down, and then we can jump to the glossary.
A: I mean, the fact that I can move from Cilium to Calico, to, you know, Flannel if I'm kicking it old school, or whatever, and have a reasonable expectation that things are going to work, while having completely different network paradigms, I think is, you know, something that it helps with. So, right.
D: And, you know, Multus multiplexes the whole thing too, so that adds yet another important contribution that CNI makes. But, right, this is out of scope for this specific issue.
J: Okay, so thank you for the clear response. Thank you.
A: Yeah, absolutely, and the stuff that you're addressing, though, is things that this group uniquely cares about, so by all means please take a peek at the... sorry, I lost my stuff... the section for adding use cases, and then building best practices off of that.
C: And you could also create a new discussion directly if you're not ready to add a use case. If you have a use case, those are one of the best ways for us to have the focused discussion, but this discussion area is another place to add.
J: I think, yes, network plugins can be used for virtual machines, but I think the CNI project aims only to adapt these plugins to container use cases. So I don't know if it is useful to...
D: It's actually a good point. I posted my rewrite in the chat; I did put "other resources". The reason it's worth pointing out: well, first of all, it's not for containers, it's pods, right? The interfaces are shared by all containers within the pod. But also, there are interesting resources in Kubernetes that use CNI independently. So KubeVirt mimics a pod. Well, it is a pod, but it's not a pod running containers.
D: KubeVirt does something entirely different. So, you know, CNI is kind of the standard within the Kubernetes world; the internal resources in the kubelet use it, but other extensions adopt it as well. So, yeah.
H: And I'm trying to avoid getting into the definition of Linux containers here as well, because, right, in reality, all of these things, if it's configuring the network namespace, you can make an argument that it's running in some form of a Linux container, with the process sitting in there, whether it's KubeVirt, a Firecracker VM, gVisor, or so on.
H
So
you
could
say,
that's
actually
being
hosted
in
a
something
that
looks
like
a
container
like
a
limits
container,
but
but
for
the
sake
of
simplicity,
because
I
think
maybe
keeping
the
word
container
there
rather
than
just
saying
oneness
container,
probably
probably
makes
more
sense
because
it
won't
less
likely
to
confuse.
H: See, and CNI explicitly is an executable. So Kubernetes calls it; it runs, the CNI exits, the interface should exist, and it provides information back in JSON format as to what's present. So this means there's no long-term lifetime of the CNI plug-in itself, so you could have something else, like an SDN or similar, keep the information and use it out of band. So CNI itself, because it's an executable, has these significant limitations.
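The invoke-and-exit model described here can be sketched in a few lines. This is a hypothetical illustration, not a real runtime or plugin: the "plugin" below is a stub script standing in for any executable that reads a network config from stdin, consults the `CNI_*` environment variables defined by the CNI spec, prints a JSON result, and exits.

```python
import json
import os
import subprocess
import sys

# Stand-in "plugin": reads the network config from stdin, looks at the
# CNI_* environment variables, prints a JSON result, and exits.
# (Illustrative only; a real plugin would actually create the interface.)
FAKE_PLUGIN = r"""
import json, os, sys
conf = json.load(sys.stdin)
result = {
    "cniVersion": conf["cniVersion"],
    "interfaces": [{"name": os.environ["CNI_IFNAME"]}],
    "ips": [{"version": "4", "address": "10.10.0.5/16"}],
    "dns": {"nameservers": ["10.96.0.10"]},
}
json.dump(result, sys.stdout)
"""

def call_cni_plugin(command, container_id, netns, ifname, net_config):
    """Invoke a CNI plugin the way a runtime would: env vars plus stdin JSON."""
    env = dict(os.environ,
               CNI_COMMAND=command,          # ADD / DEL / CHECK / VERSION
               CNI_CONTAINERID=container_id,
               CNI_NETNS=netns,
               CNI_IFNAME=ifname,
               CNI_PATH="/opt/cni/bin")
    proc = subprocess.run([sys.executable, "-c", FAKE_PLUGIN],
                          input=json.dumps(net_config).encode(),
                          env=env, capture_output=True, check=True)
    # The plugin has already exited; all that remains is its JSON result.
    return json.loads(proc.stdout)

result = call_cni_plugin("ADD", "abc123", "/var/run/netns/abc123", "eth0",
                         {"cniVersion": "0.4.0", "name": "mynet", "type": "demo"})
print(result["ips"][0]["address"])   # the address the plugin reported
```

Once `call_cni_plugin` returns, there is no plugin process left to ask about the interface, which is exactly the "no long-term lifetime" limitation being discussed: anything stateful has to live out of band.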
D: I don't think the problem is that it's an executable, because we have a concept in the Network Plumbing Working Group we call a "thick" CNI plug-in, where the executable can just be a client that runs. The problem is the definition of the interface itself and when it's called: it's called for creation, and what happens behind the scenes can be a long-running service, but the problem is it's not called again for changes, right? So the problem is the interface itself, how it's defined, and how it's supposed to be used.
A: ...CNI, like, gotchas later. What things does it do right now, other than portability? How does it help us? Because I'd like to be able to put this in; we'll have... he kicked us off initially. But like I said, it's tough, because I know we want to go down these rabbit holes about the things that leave us woefully underwhelmed with the CNI, but as far as the CNCF, you know, glossary context...
D: What I would add here is decoupling: it allows decoupling network solutions from Kubernetes development. I'm not sure how I would phrase it.
H: Yeah, CNI itself is also layer 3, as opposed to layer 2, at least from the specification perspective. Of course, it's an executable, people can do whatever they want behind it, but the interface itself, the spec itself, calls for layer 3.
D: Correct, but within the context of what CNI does, right, when it creates those network interfaces, it also allows for configuration: if you want to use Whereabouts or something like that, it can let you plug in an address allocation management system.
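The address-allocation delegation mentioned here is configured through the nested `ipam` key of a plugin's network config. A hypothetical fragment (interface name and range purely illustrative), delegating IP allocation to the Whereabouts plugin:

```json
{
  "cniVersion": "0.3.1",
  "type": "macvlan",
  "master": "eth0",
  "ipam": {
    "type": "whereabouts",
    "range": "192.168.2.0/24"
  }
}
```

The interface-creating plugin (`macvlan` here) execs the named IPAM plugin in turn, so address management is swappable without touching the interface plugin itself.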
H: Correct, and it returns DNS information to Kubernetes as well, to go into the pod. So it doesn't, actually... the file system does not exist yet when CNI runs, so there's nowhere for it to actually drop a resolv.conf, but it does return DNS information, which can then be consumed, which is going to produce the right set of configuration within the container down the line.
D
So
you
know
what
I
would
add
to
the
first
definition:
when
we
say
creating
network
interfaces,
we
can
say
creating
network
interfaces
and
allocating
ip
addresses.
G: I think there are two other aspects that we need to touch on here as well. Maybe it is very intrinsic and we don't have to talk about it: the overlay network, the underlay network, and the network policies.
H
I
I
think
we
should.
We
should
say
that
kubernetes
itself
applies
additional
requirements
to
the
cmi
plug-in
in
regards
to
network
policy
and
and
service
management,
because
that
is
a
very
important
point.
Kubernetes
will
break
if
you
did
not
support
at
the
very
minimum
services
and
it
won't
break
if
you
don't,
if
you
don't
provide
policy,
but
it's,
but
it's
a
significant
detriment
to
a
runtime
environment.
If
you
don't
have
policy
attached
to
that's
a
a
good.
H: You know, do we know if CNI itself, the actual spec, is being updated? Because that's been stable for a very long time. So historically they haven't added these additional features onto it. So is it just the additional Kubernetes requirements that are being added on, or is it specifically...?
A: Thinking about this in the historical context, so it's probably not relevant at this point in time: at the beginning, though, on the journey to stable, I feel like it was pretty flexible in getting it to where it needed to be, but then once they hit what they wanted, it definitely completely ground to a halt. And obviously that standardization, and, you know, stable portability, is its big selling feature, so it's not something they want to mess with a whole bunch.
H: Yeah, and interestingly, from the historical side, CNI was not originally designed for Kubernetes, but instead was designed for CoreOS (rkt and fleet), and they tried to put libnetwork in first, but there were some issues there; we won't go into what happened historically at this point. But the actual CNI has been relatively well defined since then. I don't think it had chaining within it; that was the biggest change that they added on, chaining, but it's been relatively stable since 2016-ish.
A: I'm trying to think of the right way to say this. So we have the decoupling; it has facilitated the integration into things like Kubernetes. I'm trying to find some way to capture in words...
A
This
notion
that
if
I
am
a
network
developer
as
long
as
I
build
against
the
specification,
I
have
a
expectation
that
my
stuff
is
just
gonna
work
which
we
know
isn't
the
case,
but
like
it's
not
like,
you
know
trying
to
think
like
the
nearest
example,
I
have
is
like
right
before
the
csi
came
about
and
like
doing
storage
there
for
a
while
was
a
nut
roll,
because
everybody
was
doing
it
very
very
differently.
A: Okay, succinctness is not my strong suit, so it's helpful to have you guys reel me in. Okay, so this is a little weird: "problem it addresses" versus "how it helps".
A
Just
management
is
a
lot
easier
with
the
concept
of
like
pod,
addressing
being
handled
for
you
things
like
that,
but
that's
not
cni
specific.
I
mean
it's
something
that
the
cni
helps
you
with
in
a
specific
implementation,
but.
H: The group was the multi-interface group, and the people from Calico pushed against it, saying they had no intention of including additional interfaces; instead, they wanted to render the changes as part of the same interface, whether they're quality of service or similar. So the CNI itself doesn't even call for a specific addition of an interface; it primarily deals with that IP connectivity. So, a little bit of a nuance.
H
It
may
it's
one
that
we
may
not
care
about
capturing
here,
but
it's
when,
in
the
wording
we
should
be
careful
to
to
not
mislead
us
as
well.
D
Well,
we're
specific
about
kubernetes,
having
adopted
cna
cni
for
a
purpose,
so
that's
inside
kubernetes
now
extensions
other
things
that
people
do
with
cni
might
not
do
that,
but
kubernetes
itself.
I
think
that
sentence
is
correct.
That
kubernetes
has
adopted
cni
for
creating
network
interfaces,
allocating
ip
addresses
for
pods
and
other
resources.
A: Keep in mind, this is just going to be a pull request too, so we don't have to knock it out of the park on our first swing. Hopefully, some other contributors in the CNCF glossary will sprinkle their own commentary in here, too.
A: If they're in the CNF working group, just tag them in the comments.
A: I'm going to, really quick... then I think we have a good starting point. We can always circle back next Monday if we want to, or just put in a PR to the glossary and see how that goes. I do want to give the last 15 minutes, though, to the glossary, which, full disclosure, I am a little bit behind on since returning from my military obligations, so you guys will have to bear with me and kind of help lead this conversation on where we're at right now with this.
D: We don't have too much time to discuss, but I wanted to make a meta comment, maybe: I'm a little frustrated with how slowly everything is going. We've been discussing this PR for a while, and I know I put a whole bunch of things here in the glossary and there was a lot to discuss, but I'd like to make a maybe general suggestion for the group.
D: I know we're using GitHub and pull requests for collaborative work, and there are a lot of advantages to that, but look, guys, this isn't source code for building; we're not breaking builds when we accept the PR. I know it's important to accept things that are good, but there might be some sort of limits to how much we can discuss everything, because this group can wax philosophical about almost anything. That's kind of what we do.
D: I think, if we're going to try to have every single line be absolutely perfect for everybody... I just don't want us to waste so much time on the individual PRs. I feel like we have a lot of work to do, and I'm not trying to shove my PR through, I'm not trying to do that. I'm really happy with all the discussions going on around this, but then I'm not the right person to click "resolve" on a discussion.
D
There's
a
lot
of
things
in
that
discussion
that
are
worth
opening
up
to
the
group,
but
we're
tying
the
discussions
to
the
pr
as
well
in
a
way
that
just
makes
it
really
really
hard
to
get
these
pr's
through.
I
I
don't
know
what
people
think
about
that.
C
Pal,
I
appreciate
the
idea
of
trying
to
get
things
through
quicker
and
iterative.
I
think
some
of
the
early
stuff
on
all
of
this,
including
like
the
white
papers,
was
let's
get
stuff
out
and
get
discussions
about
it.
So
that's
good
one
thing
that
would
help
would
be
smaller,
specifically
when
there's
discussion
items.
So
this
one
isn't
this
particular
pr
is
one
filled
with
discussions
because
it's
the
glossary.
C: This would be the same as code. If you're doing a massive code change to comments in code, then we just go "it looks good, don't worry about it". But if it's doing something like changing how a storage driver in the kernel is working, and you say "I've refactored all of it", then it's going to get blocked for a long time. And I would consider this the same thing, from the standpoint that what we're trying to do is communicate with the Kubernetes community, communicate with all of the networking community.
A: That way, if there are things that, you know, aren't going to have a War and Peace discussion written about them, then we can just get them pushed. But this was what happened the first time I made a thing: I had like three or four definitions at once, and I ended up whacking the rest of them, because, you know, there were three definitions that would have gone through with no issue, but then there was one that tied up the entire PR.
A
So
we've
got
this
one
here,
I'm
open
to
what
the
group
thinks
on
I
mean
the
simple
thing
tell
is
just
get
five
people
to
review
it
and
accept
it,
because
then
it
meets
the
quota
that
we've
set
forth
right,
but
I
would
say
in
the
future
like
this
is
something
I've
learned
and
I
think
we're
seeing
it
here
is.
I
would
just
break
every
single
definition
any
time
it
involves.
D
Yeah
I
appreciate
that
I'll
try
to
do
that
in
the
future.
A
few
of
these
glossary
terms
speak
to
each
other,
so
they
might
at
night
might
not
have
been
a
separate
pr
for
each,
but
it
could
have
been
broken
up
into
a
few
pr's
for
sure.
Anyway,
we
are
where
we
are
with
this.
I
I
I
resolved
a
few
conversations.
I
wonder:
if
we
can,
we
have
some
time
we
can
go
over
what's
remaining
and
see
if
there
are
any
more
objections
or
desires
to
fix.
D
I
also
went
ahead
and
squashed
all
the
commits,
and
so
so
now
everything
is
a
single
commit.
Instead
of
the
mess
it
was.
D: So I think the big one might be about PNFs. I think there are a few comments about how the PNF definition could be fixed. If anybody would like to reword it right now, I'd be happy to do it.
C: Jeffrey, can you bring up the view where it shows... the other view, where it just shows all the definitions? It might be easier under "Files".
C: "Files changed", and right at the top, below the review.
H: I would probably add that, other than the API that it exposes, it's generally a black box to the consumers.
A: People tell you they do, but typically it's them working with the BUs at the various vendors. Like, if you look at something like, I don't know, a Tofino chip or one of these third-party ASICs, where, you know, you can shift and lift resources to different things, like "I want more in this TCAM" or something like that, I would say you sort of do. But, you know, if you're not the one writing the embedded code yourself, you're kind of kidding yourself...
A
If
you
really
think
you
are
looking
that
deeply
into
it,
but
I
I
don't
think
I
don't
think
you
could
call
it
a
black
box,
though
just
because
I
think
most
vendors
would
complain
about
how
those
of
us
on
the
provider
side
demand
that,
like
their
bu
representative,
gets
on
a
call
and
like
unpacks
every
little
secret,
because
we
want
to
know
how
it
all
works
and
drive
them
crazy.
So.
D: I used the words "encapsulation" and "isolation", and to me that's what it's about, whether it's a black box or an open box. "Black box" is almost a derogatory term, isn't it? Every vendor would like...
H
That,
well
I
I
don't.
I
don't
tend
to
see
it,
but
maybe
others
do
because
like
if
I
I
can
create
a
linux
system,
and
I
can
say
it's
based
on
red
hat
and
all
I
do
is
give
you
a
very
specific
interface
to
use
it.
You
know
how
everything
works
inside
of
it,
but
your
your
actual
capability
to
use
and
effective
are
entirely
encapsulated
by
that
api.
So
to
you,
it's
a
black
box
from
a
usage
perspective.
Despite
the
fact
you
know
how
all
the
internals
work
so
that
that
was
the
mentality.
A: So I think that's a future discussion, where we talk about, like, if you have a good interface, maybe people should just consume it versus trying to reverse engineer it. But as far as PNFs go, though, does anybody, with the modifications that are in place now, have any burning...
D: To me, the main issue is that it's discrete, that is, it is separate. The PNF is a box that is not part of your cluster; it's not part of your cloud.
D
How
it's
built
internally,
I
think,
is
beside
the
point
almost
of
the
the
definition
of
the
pnf
right.
If
it's
open,
if
it's
closed,
if
it's
allows
you
to
install
things,
the
point
is
that
it's
not
part
of
your
cloud.
I
think
I
think,
for
the
kind
of
discussions
that
we
have
in
the
cnf
working
group.
That
is
the
defining
feature,
in
my
view,.
B
Be
honest,
it's
dan
from
bell.
I,
the
previous
definition
that
tal
did,
I
think,
is
actually
the
right
good
enough
one
because-
and
I
think
the
answer
does
the
same
thing
is:
oh,
it's
wired,
underneath
whether
it's
using
all
kind
of
open
principles
and
open
interfaces.
The
fact
that
you
don't-
and
you
know
actually
explain
it
really.
Well,
you
don't!
A: Yeah, I think I'm personally good with this definition. I think the key word here that Tal put in is "discrete", and that's really what kind of makes it different. You know, because you can couple CNFs, VNFs, whatever, but the main notion in my mind, and this is sort of what someone else was saying, is that the fact that it's discrete means that this software is designed for this hardware, versus, you know, the VNF and the CNF, which are trying to break that paradigm.
D
I'm
glad
I
do
you
know.
The
discussion
here
is
very
interesting.
The
the
issue
of
black
boxes
and
just
how
discreet
things
are
and
with
regards
to
mano,
is
an
important
one
that
I
think
we'll
have
to
continue
to
discuss,
because
this
black
box
idea
is
not
just
we
don't
just
see
it
in
pnf's,
we
see
it
and
the
etsy
approach
to
network
functions
generally
every
network
function
to
an
extent.
If
you
look
at
the
nano
architecture,
is
like
a
black
box
and
the
point.
D: You supposedly chain together these black boxes into a network service, but, as I keep saying, this very strong separation between network functions and network services, I'm not sure it always serves us well.
A: I have to drop, friends. I'm going to look at the rest of this PR, but yeah, I think what we have here for PNF is good in my book; I'm happy with this. So I will look over the rest, take a peek at Taylor's recent thing (I kind of agree with him that "cloudified" is kind of wonky), and I'll kind of work asynchronously with the group here. Have a great week, everyone, and I'll talk to you soon.