From YouTube: Kubernetes SIG Network 20170927 multi-network meeting
Description
Kubernetes SIG Network meeting from September 27th, 2017, specifically discussing modifications that the multi-network proposals would require to the Kube API.
A
All right, recording is on. Okay, so I also posted the agenda into the chat, if everyone could see that. It's just a normal SIG Network agenda document; I've listed the various documents that are relevant for this meeting as links there as well. Those were also the ones that went out in the meeting invite earlier this week or late last week. And then I also wrote some goals down there for what at least I hope to get out of this meeting, and if anybody else has some, feel free to add them.
A
Basically, both the ZTE/Intel proposal and the Huawei proposal propose adding a network field to the Service object, and then the Network object would get a cluster CIDR or service CIDR field that would then be used to allocate the cluster IP for various services, so they're at least fairly close in that regard.
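To make that concrete, here is a minimal sketch, in Go, of what those API shapes might look like; every field name here is a hypothetical illustration, not text from either proposal.

```go
// A minimal sketch of the API shapes being discussed: a Service that names a
// Network, and a Network that carries a service CIDR from which cluster IPs
// for that network would be allocated. All field names are hypothetical.
package main

import (
	"encoding/json"
	"fmt"
)

// NetworkSpec is a hypothetical spec for a cluster-scoped Network object.
type NetworkSpec struct {
	// ServiceCIDR is the range cluster IPs are allocated from for
	// services attached to this network, e.g. "10.96.0.0/16".
	ServiceCIDR string `json:"serviceCIDR"`
	// ClusterCIDR is the pod IP range for this network.
	ClusterCIDR string `json:"clusterCIDR"`
}

// ServiceSpecDelta shows only the field the proposals would add to the
// existing Service spec.
type ServiceSpecDelta struct {
	// Network names the Network whose ServiceCIDR should be used when
	// allocating this service's cluster IP. Empty means the default.
	Network string `json:"network,omitempty"`
}

func main() {
	n := NetworkSpec{ServiceCIDR: "10.96.0.0/16", ClusterCIDR: "10.0.0.0/14"}
	s := ServiceSpecDelta{Network: "video-ingest"}
	out, _ := json.Marshal(struct {
		Network NetworkSpec      `json:"network"`
		Service ServiceSpecDelta `json:"service"`
	}{n, s})
	fmt.Println(string(out))
}
```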
I also took the Intel/ZTE document and made some modifications to it, including adding some other aspects that I thought would be affected by the change, which include Endpoints and the CRI, the Container Runtime Interface API between the kubelet and the run... excuse me, the container runtime plugins themselves. And then DNS also probably needs to be addressed, so I added a note or two about that. Does anybody else have any general comments at the start, or should we just kind of jump into what people think about the various documents? I don't know if anybody has comments.
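As a rough illustration of the kind of CRI change hinted at above: today the kubelet asks the runtime to set up a single sandbox network, while a multi-network pod would need the sandbox request to carry a set of attachments. The types below are assumptions for illustration; neither exists in the real CRI.

```go
// Hypothetical sketch only: a multi-network extension to the sandbox config
// the kubelet would pass to the runtime at RunPodSandbox time.
package main

import "fmt"

// NetworkAttachment is a hypothetical per-network request for a pod sandbox.
type NetworkAttachment struct {
	NetworkName string // which Network object to attach to
	Interface   string // desired interface name in the pod, e.g. "net1"
}

// PodSandboxNetworkConfig is a hypothetical extension carrying all the
// attachments for one pod sandbox instead of a single implied network.
type PodSandboxNetworkConfig struct {
	Attachments []NetworkAttachment
}

func main() {
	cfg := PodSandboxNetworkConfig{Attachments: []NetworkAttachment{
		{NetworkName: "default", Interface: "eth0"},
		{NetworkName: "video-ingest", Interface: "net1"},
	}}
	for _, a := range cfg.Attachments {
		fmt.Printf("attach %s as %s\n", a.NetworkName, a.Interface)
	}
}
```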
B
I don't think any of them touch on all of the things that they would need to address in order to make this work, but it is exactly what I feared it would be, which is an enormous amount of complexity that we add effectively to every resource involved in anything related to workload or network, right? We touch Pods, Services, Endpoints; we're gonna have to deal with Ingress; we're gonna have to deal with DNS; we're going to deal with NetworkPolicy.
B
This sort of flies counter to the direction that things are headed with things like Istio and service mesh. Node ports and host ports: I'm completely unclear how those things could possibly work in this world. I don't actually even know how I would try to implement this on a platform like GCP, so I have a lot of questions. I just ran through a lot of things, but I can run through them a little bit more slowly. The biggest, number-one concern is: am I the only one who's concerned about the amount of complexity here?
A
Yeah, I mean, I am also somewhat concerned about the complexity, but at the same time I see a lot of complexity being added to other parts of Kube to address certain use cases, and there has been a ton of interest from various parts of the community around this kind of thing, and we've been talking about it for 18 months. And so, you know, I'm not saying that this proposal is, you know, obviously the be-all end-all of proposals. I'm...
C
I want to be clear, just to clarify my previous statement: I definitely want to find a way that we can enable the people who want to do this to do this, but I don't want to have that way impose additional burdens on, you know, the kind of majority of users who just want a single network, the way networking works today. (Mm-hmm.)
B
I don't want to imply that we should do nothing at all here. My concern is that the complexity in the designs is sort of inherent, and if we're gonna try to solve these use cases, I don't know that there's going to be an answer that has less complexity. I was hoping that somebody had a creative idea that I hadn't come up with yet. And there's actually some good stuff in these docs that I hadn't thought about, but it's more like adding complexity to what I had thought about than taking away; there's a lot of nuance that even I didn't consider when I had sort of brain-modeled this, so I guess thanks for that. On Casey's point: if there's anything we do here, let's assume for a minute that we do something, anything non-zero.
B
The real question that I have is: we have been talking about this for 18 months, but we have a massive selection bias going on too, right? The people that we're talking to in this call and in this SIG are obviously slanted towards wanting this. I am wondering if the right way to suss it out might be to flesh out, you know, one of these docs into a complete doc and then share it with people from the workloads side of the world and see how hard they recoil.
B
Without putting a ton of thought into the answer there, because I have only had seconds: I don't know that there's gonna be any model that we can develop here that doesn't impose burden on a lot of the rest of the Kubernetes developer ecosystem. Yeah, that's very true. It touches Pods, it touches Services, it touches controllers, it touches the way DNS works. It touches, at least in some of the proposals, the way namespacing is used. There's a lot there.
B
Do we want to talk about the details of the specific proposals? Or, like, I don't know how to proceed. I hate being the guy who says, I'm scared of this, let's not do it, but I have a bunch of more concrete points that I would like to ask about and just see if people had answers. Sure, why don't we start there? Okay, so, putting on my other hat for a minute as a Google Cloud person:
B
What happens when I have a platform that doesn't offer this? Like, I don't particularly want to offer overlays on top of overlays on top of overlays. What do I do for my users? How do I express, sorry, you can't do that here, and doesn't that break the portability promise? Because networking becomes an application construct in this model, right? Every application in the world will start to specify what it wants out of its private network.
E
This is Ben Bennett. Can I just take a swag at that? Isn't the guarantee we make today that if you have an interface, you can talk to any other pod, modulo, obviously, filtering from the network policy stuff? Wouldn't we make that same rule about a second network? We'd basically say you're guaranteed to be able to talk to the other things in this network, but we don't necessarily guarantee it's isolated. So effectively you get pod IP addresses on the same network you have today.
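A toy encoding of the guarantee Ben describes, assuming a hypothetical per-pod list of attached networks: two pods can reach each other if and only if they share a network, with no isolation promise beyond that (network policy aside).

```go
// Thought-experiment code, not proposed API: the connectivity invariant for
// a multi-network cluster, expressed as a predicate over pod attachments.
package main

import "fmt"

type Pod struct {
	Name     string
	Networks []string // networks the pod is attached to
}

// canReach reports whether a and b share at least one network.
func canReach(a, b Pod) bool {
	for _, na := range a.Networks {
		for _, nb := range b.Networks {
			if na == nb {
				return true
			}
		}
	}
	return false
}

func main() {
	web := Pod{"web", []string{"default"}}
	cam := Pod{"camera-feed", []string{"default", "video-ingest"}}
	enc := Pod{"encoder", []string{"video-ingest"}}
	fmt.Println(canReach(web, cam)) // true: both on default
	fmt.Println(canReach(web, enc)) // false: no shared network
	fmt.Println(canReach(cam, enc)) // true: both on video-ingest
}
```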
D
It seems to me the problem here is that to create a network, you have to do two things: you have to create the actual physical network, or SDN, or whatever, and you have to create some Network objects to let you use it. And the pain point, really, as I understand it, is that you can create Network objects that point into hyperspace; they refer to things that just don't exist. And there's a more general point of how do you make...
B
That's an interesting way to position it. If network is an administrative construct, right, and I as the cluster administrator can decide which networks are available to my cluster and then let you consume them, that's one step. But it sounds like, from reading the proposals, that most people are proposing network as a really application-y construct, and some of the use cases sort of demand that.
A
That's interesting, because I actually was not thinking of it like that at all. I was thinking of it more from the administrative side, so that, you know, if you create a Network object, then some network plugin would be off listening for that kind of thing and creating all the backing resources, so they would be ready when that network was used. It wouldn't necessarily be, you know, some kind of ethereal random concept.
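A compressed sketch of the plugin pattern just described: something watches for Network objects and provisions backing resources so the network is ready before it is used. The event plumbing is stubbed out with a channel; a real plugin would use an informer against the API server, and all names here are invented.

```go
// Sketch of a network plugin's reconcile loop over Network object events.
package main

import "fmt"

type NetworkEvent struct {
	Type string // "ADDED" or "DELETED"
	Name string
}

// Provisioner is whatever backend (SDN, cloud API, VLANs) backs a Network.
type Provisioner interface {
	Create(name string) error
	Delete(name string) error
}

type logProvisioner struct{}

func (logProvisioner) Create(name string) error {
	fmt.Println("provisioning backing resources for", name)
	return nil
}

func (logProvisioner) Delete(name string) error {
	fmt.Println("tearing down backing resources for", name)
	return nil
}

// run reacts to Network lifecycle events until the event stream closes.
func run(events <-chan NetworkEvent, p Provisioner) {
	for ev := range events {
		switch ev.Type {
		case "ADDED":
			p.Create(ev.Name) // errors ignored in this sketch
		case "DELETED":
			p.Delete(ev.Name)
		}
	}
}

func main() {
	ch := make(chan NetworkEvent, 2)
	ch <- NetworkEvent{"ADDED", "video-ingest"}
	ch <- NetworkEvent{"DELETED", "video-ingest"}
	close(ch)
	run(ch, logProvisioner{})
}
```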
A
So I don't necessarily know if that would be network-per-channel. It was more about: you have one network where all the video comes in, and particular pods might subscribe to one particular channel and then transcode that and spit it out on more of a world-facing network, if that makes sense. So essentially, in that particular case, I was thinking of two to three different networks: a control plane network for managing these things, and then an ingress network and an egress network, but not per channel or per app, necessarily.
A
Logical networks, or, I don't know what you want to call it, a pure logical network. So it is much more, you know, administratively defined networks. And when you say performance, you mean network performance? Or do you mean... yes, yep, network performance: so, you know, like 10 or 40 gig per second, both on the ingress side and the egress side in the best cases.
G
It will ask for, you know, the same capability, but in a self-service manner as opposed to a provider-provisioned manner, so it'll essentially open the door to sort of the full thing. I'm not saying that's necessarily a good or a bad thing; I'm just saying that if you take one use case where provider-provisioned multi-network is okay, soon enough there'll be other use cases where either more flexible provider-provisioned networks or self-service application-provisioned networks will then be requested.
B
That's exactly what I believe too: that it is an attractive nuisance, and it's sort of the thing that people have done in other systems because it makes a lot of sense at the VM level. And I've still got this nagging voice in my head that says this is different; this is not VMs, and relying on the same techniques is questionable.
A
Also, to be fair, in a lot of the cases that I've run into, at least some of the limitations are mainly performance-based. It would be great if you could, you know, create kind of logical or virtual networks for all of these use cases, but unfortunately the performance issues at this point prevent it.
B
Concretely, Google's built a really complicated system that we wrote a paper about a while back, called Bandwidth Enforcer, which builds this complicated token tree of policies, and what we get is effectively different quality of service assigned to different jobs and to different port ranges. And, I'm going to say again, it's very complicated, but it actually works really well in practice, with a single network.
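As a toy illustration of the single-network idea (and nothing like the real Bandwidth Enforcer internals), the sketch below divides one link's capacity among jobs by weight, showing QoS as a policy over one network rather than a reason for many networks.

```go
// Toy weighted-share allocation: divide one link's capacity among jobs
// proportionally to weight, all on a single network.
package main

import "fmt"

type Job struct {
	Name   string
	Weight float64 // relative share of link capacity
}

// allocate splits capacityGbps among jobs proportionally to weight.
func allocate(capacityGbps float64, jobs []Job) map[string]float64 {
	var total float64
	for _, j := range jobs {
		total += j.Weight
	}
	out := make(map[string]float64, len(jobs))
	for _, j := range jobs {
		out[j.Name] = capacityGbps * j.Weight / total
	}
	return out
}

func main() {
	jobs := []Job{{"video-ingest", 6}, {"transcode", 3}, {"control-plane", 1}}
	for name, gbps := range allocate(40, jobs) {
		fmt.Printf("%s: %.1f Gbps\n", name, gbps)
	}
}
```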
B
I mean, it is, except that I'm trying desperately to keep performance where performance matters. Like, today the performance hit of container networking is already too high, and a lot of people are working to get away from that. I don't think that imposing more performance loss for the sake of a very niche user base is gonna be the right trade-off.
B
In order to offer sort of application-level networks, with sort of unbounded cardinality, I have to move into the software domain, right? But GCP is already sort of in the software domain, and, you know, so is Amazon, so is Azure. I don't think we have anyone from those groups here, and I don't know the details of their network infrastructures, but these things are already in the software domain. So adding a software domain on top of a software domain is guaranteed to be a performance hit.
D
Extra special networks: that's a network tied into some other, third-party, customer system or whatever, which is a physical cable being plugged in to a bunch of interfaces on certain of your nodes, that certain of those nodes can be connected to. So I don't understand why that's performance; it's kind of parallel to the existing one.
B
So I think what you just described was sort of where we started this conversation, which was, like, extra networks, and that I can actually buy as a model, because I see the application of it, and I think you're exactly right. It's sort of not an often-provisioned sort of thing, and so the impact of, you know, provisioning those sorts of networks is probably acceptable, and you don't need to do it with any real performance loss. A platform like GCE could easily accommodate that.
B
Let me capture that in the notes. Then, I mean, I feel bad for this, but wait: this was part of, a big part of, Joe G's original doc, which was, what if we just said there's always a default network and then there are extra networks. You can join extra networks to get special access to special links, and that seems reasonable to me.
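A sketch of that default-plus-extras model as it might look on a pod, with a hypothetical field: every pod is on the default network, and may opt into named extra networks for special links.

```go
// Hypothetical pod-level shape for the default-plus-extras model.
package main

import "fmt"

type PodNetworking struct {
	// ExtraNetworks are joined in addition to the always-present default
	// network; an empty list means today's behavior, unchanged.
	ExtraNetworks []string
}

// attachedNetworks lists everything the pod ends up on.
func attachedNetworks(p PodNetworking) []string {
	return append([]string{"default"}, p.ExtraNetworks...)
}

func main() {
	plain := PodNetworking{}
	special := PodNetworking{ExtraNetworks: []string{"storage-backbone"}}
	fmt.Println(attachedNetworks(plain))   // [default]
	fmt.Println(attachedNetworks(special)) // [default storage-backbone]
}
```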
H
So this is [inaudible] speaking from the hallway; I'm here with Karen. I just wanted to add the point that this was supposed to be an incremental type of work, in the sense that most of the components that you just mentioned are designed to be added in an optional way: if there is a risk of performance hits for some particular application, the user has the freedom whether to take that optional piece or not.
H
The only piece that would make sense to be mandatory is the Service piece, because in that case, if there is no Service support for multi-networking, then there is no point. But other than that, even the logical network definition or the plugins, those are all options that can basically be added upon request. They don't have to be part of the vanilla solution.
B
Except people's bundles of their application configs become non-portable, right? If an application is depending on this networking functionality to work, they can't pick up their application and move it to another platform with reasonable confidence that it works. And, just concretely, we made a very similar guarantee with Ingress, where we said, you know, it's implementation-defined, and with network policy too, actually, to some degree.
A
To kind of circle back around to some previous conversations from long ago: we talked about how, as I remember it, adding extra networks doesn't necessarily require API changes in Kube, so the contention at that point was, why don't network plugins just go do it themselves with, like, custom resource definitions and all that kind of thing, rather than changing Kube; there's a strong argument there. So yeah, sort of in response to that, you know, we started looking into, well...
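A sketch of what that CRD route could look like: the Network type lives in a plugin-owned API group registered via a CustomResourceDefinition, so nothing in core Kubernetes has to change. The group, version, and fields below are all invented for illustration.

```go
// Hypothetical custom resource a plugin could register under its own group.
package main

import (
	"encoding/json"
	"fmt"
)

// Network is a custom resource served from a plugin-owned API group,
// e.g. networks.example.io/v1, rather than from core kubernetes.
type Network struct {
	APIVersion string `json:"apiVersion"` // "networks.example.io/v1"
	Kind       string `json:"kind"`       // "Network"
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Spec NetworkSpec `json:"spec"`
}

type NetworkSpec struct {
	Plugin string `json:"plugin"` // which CNI plugin/config backs this
	Subnet string `json:"subnet"`
}

func main() {
	var n Network
	n.APIVersion = "networks.example.io/v1"
	n.Kind = "Network"
	n.Metadata.Name = "video-ingest"
	n.Spec = NetworkSpec{Plugin: "macvlan", Subnet: "192.168.50.0/24"}
	out, _ := json.MarshalIndent(n, "", "  ")
	fmt.Println(string(out))
}
```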
G
Can I ask a question here? You mentioned that for not just GCE but literally all of the cloud providers, networking is already at a software layer, and that, you know, you don't want to incur the penalty of overlays on overlays. But it seems like the use cases that are driving this requirement are primarily going to be targeted at non-public-cloud deployments, at edge types of applications, where it is perfectly possible to have multi networks that are not implemented as virtual overlays on top of a common single pool of networking. So if the goal here is to enable non-public-cloud deployments of Kubernetes, which I'm sure it is, we should, you know, let that go through, in the sense that it's just not particularly applicable to public clouds. And it's not going to be... I mean, do we need to... you said that you want some applications to run equally everywhere.
B
Let me know if I'm way off here. I understand what you're saying, but the litmus test for every topic sort of in this space is: does it touch the Kubernetes API? If the answer is yes, then we need to be confident that the feature that we're proposing is reasonably implementable on most or all of the providers that we know we support. There's a lot of weasel words in that sentence.
B
Right, obviously I don't know all the platforms that Kubernetes supports, and obviously "most or all" doesn't really mean much, but I can't in good faith add a feature that I know doesn't work on the majority of public clouds, right, or that works but only very badly, that would require significant imposition on those. I can't add that to the public API. And we've shot down other features in the storage space and in the networking space. It's always infrastructure land, right? But we've shot down other features that are not easily implementable.
B
What I'm seeing in terms of cloud providers is much more of a move towards containers as a native thing, and if we impose overlays and SDN inside the cloud space, we're going to break the evolution of cloud providers, right? Amazon just launched, two weeks ago, the ability for their ELBs to target arbitrary IPs, which I have to presume is targeted towards containers, right? Like, that's an awesome feature. This will break that.
G
Certain things are going to be more amenable to edge deployments and certain things are going to be more amenable to large public cloud deployments, but the software stack has to presumably support both types of use cases. This goes back to the discussions on multi-tenancy and so on, right, the relevance of which might be different in a small edge cloud versus in a big public cloud. Yes.
B
That problem isn't yet solved, and honestly, truthfully, the solutions to that may lend themselves towards this anyway. And now I'm looking around the room and I realize that our multi-tenancy people are not here; oh, it's way too early, they're not even awake yet. We should probably make this conversation start to include the people who are looking at multi-tenancy, right? And I know, Danny, you sort of wear both hats a little bit, right? Yeah.
B
So the big line to take there is: Kubernetes can't absorb every functionality for every workload; we have to be extensible. We've been focusing a lot on extensions and extension mechanisms in the last year, but what we've come to be discussing through this (excuse me, hiccup) conversation is deep changes to the core of Kubernetes.
B
Talking about changing Pod, changing Service, changing Endpoints: those sorts of changes will be met with a lot of skepticism from people who don't need, or can't use, those changes. Whereas if we can build solutions here that are based entirely around custom resources, or around aggregated APIs, or around annotations, then we can at least say: for those edge deployments where you need some extra complexity, you can take on the burden of that complexity, and for the core systems, where you don't need that complexity, you don't pay for it.
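For contrast with a core-field change, the same request could be expressed without touching core types at all, as a plain annotation that only an out-of-tree controller interprets; the annotation key below is invented for illustration and is not a defined standard.

```go
// The multi-network request as an annotation: core kubernetes ignores the
// key entirely, and only the extension that registered it reads it.
package main

import "fmt"

const networksAnnotation = "networks.example.io/attach" // hypothetical key

func main() {
	podAnnotations := map[string]string{
		networksAnnotation: "video-ingest,storage-backbone",
	}
	fmt.Println("requested extra networks:", podAnnotations[networksAnnotation])
}
```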
B
The truth is, if it's implemented as annotations or CRDs, I actually care less about the portability, because you can't take an arbitrary sort of extension and expect it to run correctly on every cloud anyway, right? That part of the contract is: it's not part of the core, therefore I don't guarantee its portability. And even, strictly speaking, if you talk to SIG Architecture, Services are not exactly core; they're sort of the next layer. Sorry, core is not the right word; nucleus is the right word. Services are in that first electron ring, I guess.
B
I'd have to think about how to answer that. There's a ton of people who are doing interesting things with CRDs, as you well know; it's hard to say whether they're defining them as standards. One that pops to mind is Istio. They're probably too early to really be defined as a de facto standard, but they are using things like CRDs to do a really interesting data model and build on top of Kubernetes. That offers a standard for which there is nobody else competing, so I guess that makes it de facto.
A
Yeah, I mean, if we went down the road of defining annotations and custom resource definitions for all this stuff, and then sort of agreed that this is the SIG Network de facto standard for how you do multi networks, I guess, in your words, that kind of solves the portability problem. But I don't really see how that makes any less of a difference with respect to the Kube API, because, you know, a lot of people would start using that standard, implementing that standard, and depending on it.
B
Those annotations and those extensions are not Kubernetes extensions; they are... we'd come up with a different name for them, right? They would not be part of the Kubernetes API specification; they're not part of the portability story. Now, part of the problem that we have here is that some of the things we're sort of hand-waving about don't exist yet, right? Conformance is just a budding thing right now, so, you know, the conformance effort is still defining conformance levels, and what that really means is still being worked out.
B
So it's hard to say this is not part of the conformance level, because the conformance level isn't really a thing yet, but this would not be part of a conformance level, right? You could be completely Kubernetes-conformant at the highest conformance level and not support multi network, and that's totally fine, I mean.
A
I mean, that was my major concern with that: we'd have to basically have a bunch of other code that's under the SIG umbrella that would implement these things that everybody would want, and, you know, that obviously would not necessarily be part of Kubernetes itself, right, because it would deal with custom annotations that aren't part of the Kube API.
B
Worth looking at, yeah. I mean, I think that's worth considering, as to how those things could possibly be made to work. I don't have clear answers to them right now, because I don't have the full list of them in my head, but, you know, the first couple are obviously challenging, right: how to extend the endpoints controller, or replace it. I mean, replace it is an answer, right? Well...
B
I think getting some other viewpoints on this topic would be fascinating. I am a little worried that it's still very abstract, and, you know, if we brought in Bryan or Eric, or, I don't know how deeply Clayton's reviewed this stuff, but bringing in some of those guys, or the Joes and the Brendans, who aren't part of SIG Network at all but are definitely part of the overall technical leadership, getting their feedback on this, on the depth of the integration required here, would be insightful.
B
I don't want to stop anybody from going off and doing work, right? I think the thing that makes open source really interesting is that anybody can go off and do anything they want, and there's no promise that we'll take it, but they can go off and prove that we're wrong. I also don't want to tell people to do things that might be a waste of their own time.
B
I think that would probably be worthwhile. I mean, I read over all these docs and they were 85% overlapping. There were some points of disagreement, you know, whether networks were namespaced or not, and some included use cases while some just went straight into design. But, interestingly, we have a steering committee meeting today, in fact, so maybe I'll bring this topic up, just as a float, just to see what the initial feedback is. Yeah.
A
That would be awesome. You know, if you could just raise the visibility of this question, that would be great. Okay, more eyes on it, sure, because I think, I mean, we can keep going around in the echo chamber of SIG Network for as long as we want, and I'm not sure that's going to get us as far as getting the other people mentioned earlier to look at it.
B
I'd also encourage that, if we're going to look at having Network objects that are the sort of source of truth, we try to describe them in a way that isn't specific to CNI. Most of the proposals end up exposing a lot of the details of CNI in the Network object; it would be nice to abstract that a little bit. You know, as an example: what we did with storage, with StorageClasses and PersistentVolumes, right? Network and NetworkClass might be something worth looking at.
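A sketch of that storage analogy, with both types invented for illustration: a NetworkClass describes how a kind of network is provisioned, hiding the CNI details, and a claim names a class the way a PersistentVolumeClaim names a StorageClass.

```go
// Hypothetical NetworkClass and claim, by analogy with storage.
package main

import "fmt"

type NetworkClass struct {
	Name        string
	Provisioner string            // e.g. "example.io/sriov"
	Parameters  map[string]string // backend-specific knobs, opaque to users
}

type NetworkClaim struct {
	Name      string
	ClassName string // binds the claim to a NetworkClass
}

func main() {
	class := NetworkClass{
		Name:        "fast-ingest",
		Provisioner: "example.io/sriov",
		Parameters:  map[string]string{"bandwidth": "40G"},
	}
	claim := NetworkClaim{Name: "video-ingest", ClassName: "fast-ingest"}
	fmt.Printf("claim %q -> class %q via %s\n",
		claim.Name, class.Name, class.Provisioner)
}
```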
H
Okay, I was going to suggest: maybe we can isolate the resources that can be implemented using custom resource definitions from the ones that essentially have to be in the code, that require an API change. In that case we'd know the complexity of the must-have stuff versus the complexity of the pieces that can be done using CRDs.
B
That's a good question. I also wonder if we can get away with, at least for the sake of demonstration, not modifying some resources but actually making new resources that would be the new version of this: like, instead of modifying Endpoints and the endpoint controller, saying we're gonna make a new MultiEndpoint, or MultiNetEndpoint, and a multi-net...
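A sketch of that new-resource idea, with a hypothetical type name: a separate MultiNetEndpoints object that groups addresses by network, leaving the existing Endpoints object untouched for everyone else.

```go
// Hypothetical parallel resource instead of a modified core Endpoints.
package main

import "fmt"

type MultiNetEndpoints struct {
	Service   string
	ByNetwork map[string][]string // network name -> pod IPs on that network
}

func main() {
	ep := MultiNetEndpoints{
		Service: "transcoder",
		ByNetwork: map[string][]string{
			"default":      {"10.0.1.5", "10.0.2.9"},
			"video-ingest": {"192.168.50.5", "192.168.50.9"},
		},
	}
	for net, ips := range ep.ByNetwork {
		fmt.Printf("%s: %v\n", net, ips)
	}
}
```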