From YouTube: Network Plumbing WG Meeting 2018-01-18
B
Sorry — is that better? Much? Okay. So anyway, we'll get started. Recording is on, and if nobody else has any other topics, we'll just go through the list of outstanding questions that I put down again. If you come up with other things to talk about today, drop them in the agenda document, which is in the BlueJeans — sorry, in the Zoom chat. So, to start off with: what actually is the network object? I think, Mike, you have talked a lot about network attachment.
B
I think we should come up with a nice, concise definition of exactly what we would like to put into the CRD, to some degree. Yes — I think it is a network attachment, and the custom resource is a description, or definition, of how to create an attachment. Exactly: it is not a network, it is not an attachment; it is a description of how to make an attachment. Although I would point out that, due to the way CNI is structured —
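A concrete sketch may help pin this down. The custom resource being circled here might look roughly like the following (the kind and field names are illustrative assumptions — the group had not settled on naming at this point):

```yaml
# Illustrative only: a custom resource that is neither a network nor an
# attachment, but a description of HOW to create an attachment.
apiVersion: "example.cni.cncf.io/v1"
kind: NetworkAttachmentDescription   # hypothetical name
metadata:
  name: macvlan-conf
spec:
  # Embedded CNI configuration telling the plugin how to attach
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }
```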
B
One of these things — the result of one of these things — could result in multiple attachments. I think, as you define an attachment, it is either an IP address or an interface — is that correct? As in: if you're talking about a layer-3 network, then it's an IP address, but if you're talking about a non-layer-3 network, then it's an interface or a MAC address.
A
I think it's an important thing to be clear about. I think we have some choices, and I kind of think the sense of the group is that we want to focus on a network interface — except, of course, for the people who care about DPDK in ways that don't use network interfaces. I'd like to set the non-network-interface part of the discussion aside — just put that in a separate box. I do think it's important to support stuff like DPDK that doesn't use network interfaces.
B
Fair enough, although I can think of scenarios with more SDN-type networks, where the SDN handles the attachment to multiple logical networks, as opposed to — you know — all of that going through one interface. And in that case you can think of a case where the interface gets multiple IP addresses, and each one of those is theoretically an attachment to a given network.
C
So — as was suggested, I'm just calling it a CNI plugin, because we don't actually have any requirements of it at all, other than that at least one of the plugins comes up with an IP address for us to use, like a traditional plugin would. True — and as long as it does that, the spec has no reason to care whether it creates a network interface or not, or whether it creates two, or zero. If we get back a usable IP address from it, then it's done its job.
A
Even if we haven't converged on whether an attachment means one network interface or one IP address, or exactly what it means, I think we do have a concept here of network attachment that we're circling and trying to nail down, and I think it is about the right kind of abstraction. All right: we want one invocation of one of these things to make one network attachment. Okay.
A
I'm not sure that's the right way to characterize it — so, mm-hmm, go ahead. Yeah, I've always found CNI to be a little bit schizophrenic, in that, in some sense, it talks about a plugin as a binary that you invoke, and then there's this file that gets fed in on stdin. And if that were all there were to it, then you could think of what gets fed in as constant input — the spec, I think, calls it something like a network configuration, right.
A
So
that's
really
a
description
of
what
you're
connecting
to,
but
in
fact
in
kubernetes
and
in
everybody
else's
mind.
As
far
as
I
can
see,
this
file
that
gets
fit
in
oncidium
is
also,
and
now
in
the
spec
house.
Well,
this
file
that
gets
fed
in
is
not
only
fed
in,
but
it's
read
by
things
that
invoke
seeing
that
plugins
to
discover
stuff
about
the
capabilities
of
seeing
my
plugins.
A
If you think of this plugin as a way of making that network attachment, the file is telling the higher layers some things about this way of making a network attachment, as well as giving input to the CNI plugin, helping to tell it how to make the network attachment. Yeah.
C
I think, until we come up with a final answer — I mean, if we rename it now, we have to either also rename every reference to it in everybody's comments all throughout the spec, or have comments that no longer use the same naming. So let's just leave it as it is until we come up with its final name. Sure.
A
Sorry — let me try that again. Okay, the two directions we could go: one is to have two annotations per pod — one is a "spec" of a collection of attachments, and one is a "status" of a collection of attachments. The other is to have a collection of objects, each one of which describes the spec and status of one attachment of that pod.
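The first direction — two annotations per pod — could be sketched like this (the annotation keys and their JSON payloads are assumptions for illustration, not the WG's agreed format):

```yaml
# Sketch: one "spec" annotation with desired attachments, one "status"
# annotation written back by the implementation.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    networks: '[ { "name": "macvlan-conf" } ]'
    networks-status: |
      [ { "name": "macvlan-conf", "interface": "net1", "ips": ["192.168.1.5"] } ]
spec:
  containers:
  - name: app
    image: busybox
```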
B
But then the second point there is that the thing that receives that status — essentially the plugin implementing the specification — would not have to write that back to the API server as an object; it would only need permissions to update a given pod. I don't know if that makes a material difference for security policy in Kubernetes, because most network plugins that do these things would have a service account or something like that anyway, which would give them permissions to update most things, but I —
A
It is more bookkeeping. The virtue of it is for the sake of clients who want to look things up in the other direction. If you have a network — a thing that describes how to make attachments to a network — having these separate objects makes it easy to find all the things that have actually used that method to make an attachment.
B
Right. Are there any security implications either way? For example, if it was a separate object, we could apply a different security policy to it, right — as opposed to on the pod, where annotations can be read by anything that can read the pod, right, I think.
C
You're talking about annotation versus CRD now, but where do we imagine this data going if we weren't using a CRD? If we really, fully integrated with this, would we expect this to be a field in pod status? Yes. So if we're going to do that, I feel like it makes more sense to have it as an annotation on the pod, rather than creating a separate API object, actually.
A
I'd have agreed, yeah — I mean, this distinction is one that can be carried forward. So, yeah, I think even if we succeed in getting this idea into mainstream Kubernetes, it still makes sense to have separate objects, because again it's the easier security story — but, more importantly, it makes it easier to look things up that way. Unless —
F
— we want to update the network object — the CRD object — with all the IP address information. In that case we can say that if you need the IP information, you just need to read this object, rather than doing a get on the pod and getting one IP address from the pod and the other IP addresses from the CRD. So the CRD can have all the information about the networking for that pod.
A
Well, I was proposing an object that has information about all the networking for one pod. Suggesting — I'm not quite sure I'd go as far as advocating — but I will say that I have colleagues who advocate this approach: we define two kinds of CRDs. One describes, as we've been discussing, how to make a network attachment. The other describes the spec and status of one particular network attachment of one particular pod.
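The second kind of CRD described here — per-attachment spec and status — might be sketched as follows (all kind and field names are hypothetical):

```yaml
# Sketch: one object per (pod, attachment) pair, carrying both the desired
# state and the observed status of that single attachment.
apiVersion: "example.org/v1"
kind: PodNetworkAttachment
metadata:
  name: example-pod-net1
spec:
  podName: example-pod
  attachmentDescription: macvlan-conf   # references the "how to attach" CRD
status:
  interface: net1
  ips: ["192.168.1.5"]
```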
A
I'm saying: one way we could go is that a user writes an annotation on a pod, saying "here are my desired attachments for this pod." The other way we could go is that the user creates a collection of objects for the pod, each one describing one desired attachment for the pod. In both cases, the user is creating the declaration of desired state. What the plugin does is either update the annotation on the pod that holds all the statuses, or update the status of each of those individual attachment objects.
A
It would work, I guess; there are a couple of ways you could imagine doing it. Well, okay — we're getting into another part of the discussion, which is who is responsible for making this happen. Let's just be vague about that for the moment: there is some thing that makes these happen, right, and the simplest, most direct way to do it is when it's time to attach.
A
So — okay, I'll be specific. Let's say, for the sake of argument, that the set of attachments on a pod is not dynamic. Then at pod creation time, somebody has to find all the desired attachments for a pod, and that somebody or something could do a list operation with a suitable filter that would return the attachment objects. Each attachment object would refer to a pod and to one of these other CRDs that describes how to make a network attachment, right.
B
An annotation that lists — so, as the current proposal has it, there's an annotation on the pod, and it just lists the names of the network attachment descriptions. What you're proposing is that, instead of that annotation, there would be a pairing custom resource that would list the pod — and possibly use label selectors; I think you'd mentioned that in the Google Doc? No? Okay — so it would be a per-pod object that lists the pod somehow, and then also the network attachments that that pod — one object per attachment? Okay, one per attachment.
B
I think what — and correct me if I'm wrong, Peter — I think what you're saying is that in the current spec, if you don't specify the annotation to select any network attachments, then you just get the default networking, right. And what you're asking is: if you do the same thing in Mike's scenario, would there be one object per pod or not, right? Yes.
A
Right, yeah, I agree with that. Now I'm more concerned — come to think about it (I'm sorry, I didn't think about this earlier) — about, as you say, the flow of just creating a pod. If you do want multiple attachments for a pod, and we do have them as separate objects, do you create these separate objects first or later? Yeah — there's no good answer there; both have problems.
A
Yeah, I think we get the downside. The upside of making them separate objects is that it's easy to query not only for all the pairing objects that are relevant to a given pod, but also for all the pairing objects that are relevant, on the other side — yep — to a given network attachment factory. So —
A
You really wouldn't want to do that — that's going to have a scalability problem. If you get lots of attachments, and the cost of updating each one is O(N), it's really painful. A better answer, I think, would be to create a separate indexing service, so that there's something that would efficiently serve that query based on a cache in memory.
B
Okay, so next up was the default-plugin discussion, and there are a couple of different issues here. The first one is which attachment's IP and other details get reported to Kubernetes, because kube only allows one at this point. The second one is which one should get the default route inside the pod's namespace — and perhaps one and two should be the same one, maybe not. And then the third one is what should be used for health checking by Kubernetes — and I know, at least for the health-checking discussion —
A
I'm not sure — I think we're kind of going off the rails here. You know, we've already decided, right, that the spirit of what we want here is that there's one main network, and then we've got these sidecars. And — sorry, I have to digress a little bit about this word "network", because I think it trips us up. Sometimes when we say "network" we think Ethernet or virtual Ethernet.
A
Then sometimes we mean something deliberately, strictly higher level, which is just: all the IP addresses that can talk to each other. And I want to focus on that latter one, and I've been using the term "a plane of IP connectivity" instead of "network", just to avoid confusion, right. So what we're trying to do here — and I thought we all agreed on this — was that the design has one main plane of IP connectivity, right. It's the one that you get —
A
— if you don't use any of this special stuff. It's the default one. It's what kube will do, it's what Services will use, it's what goes in endpoints — it's what all the rest of kube is going to use, right. So I thought we'd agreed that every pod participates in that, and we're talking about an annotation or whatever to make additional sidecar attachments to additional sidecar networks, which may or may not be separate planes of IP connectivity. Okay.
A
A plane of IP connectivity — I think we'd already agreed, yeah: what each pod gets is this default plane of IP connectivity, plus optionally other attachments, which might also provide connectivity into the same plane, or not. That's a detail that we don't need to fix; I mean, it is set one way or the other.
B
I hadn't necessarily thought about that yet either, so — I don't have any particular problem with it. I think the only issue might be if we eventually talk about overlapping IPs and things like that. If you have a sidecar network that overlaps with the IP range of the default network, then we'd have to start doing some dances, yeah.
H
So this is not just the NFV world — think of, say, a VPN gateway that somebody implements here, or somebody who wants to call back home, right, using a VPN. You might end up with the same thing if you have address injection coming in, right — say somebody decides to use the same 10.0 network that Kubernetes is using, for example. So I think it's probably more common than I, at least initially, thought, but I think we need to support it. Yeah.
B
I guess what I'm hearing is that most people don't have a problem with the default network — which Kubernetes, right now, always attaches to a pod — always being attached to pods here as well, with this annotation adding additional sidecar networks. As for the overlapping-IP discussion: we need to continue talking about it, but it does not have any particular effect on the annotation and this behavior, because it's already a problem with the existing behavior, and it would also be a problem with sidecar networks anyway, right. Yeah.
B
What happens right now is that Kubernetes just looks in /etc/cni/net.d, and the first file that it finds, sorted alphabetically, is the one that it considers the default network. So what this means is that you have to be very careful about how you name things — so, for example, you would need to name it something like —
B
Well, it does matter for the purposes of how this plugin gets called. To actually get this meta-plugin called — assume we're talking about, for example, Multus or CNI-Genie, it doesn't matter — you have to put a Multus config into /etc/cni/net.d sorting earlier than any other config in that directory. Then Kubernetes will find it, and it will call Multus or CNI-Genie, which then implements the specification. If you do happen to have other files on disk — say, for example — I don't know that —
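The first-file-wins behavior described above is just a lexicographic sort of the directory listing, which a small sketch can make concrete (the file names are invented for illustration):

```python
# The kubelet treats the lexicographically first config file in
# /etc/cni/net.d as the default network, which is why a meta-plugin's
# config (e.g. "00-multus.conf") must sort ahead of everything else.
def pick_default_network(config_files):
    """Return the config file name the kubelet would treat as default."""
    return sorted(config_files)[0]

files = ["70-calico.conf", "99-loopback.conf", "00-multus.conf"]
print(pick_default_network(files))  # -> 00-multus.conf
```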
C
The answer here is just to say: if you use the network object, then it bypasses the normal CNI search paths, and whether Kubernetes is directly implementing the Multus-like behavior or is actually calling out to Multus, it just happens correctly, regardless of what CNI config files you have installed. Okay.
B
So in that case, then, we need to talk about the specification of — sorry — the Network spec struct, because currently, as it is, there are three different ways to specify how to call a CNI configuration or configuration list. One is to embed the config into the Network spec. The other is to specify the plugin binary names explicitly. And if you do neither of those things, then it will currently look on disk for a config file. I don't think that's —
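The three alternatives in the Network spec struct could be sketched like this (the field names are assumptions, not the WG's final schema):

```yaml
# 1. Embed the CNI config (or config list) directly in the object:
spec:
  config: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "eth0" }'
---
# 2. Name the plugin binary explicitly:
spec:
  plugin: macvlan
---
# 3. Specify neither, falling back to a search for an on-disk config file:
spec: {}
```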
A
— a conflict or a problem. Okay, before we go further: it just means you have to be careful about how you name things, and I think what Dan Winship said means it's a non-issue. But before we continue with this, I think you skipped something that belongs with the earlier discussion regarding the default — which is that, currently, CNI plugins are making the one and only connection, and so they normally set a default route and provide DNS client configuration. But for the sidecars, we want them to not set a default route or provide DNS client configuration.
B
To be clear: they could provide DNS client information, but that information would potentially be thrown away, because it does not materially affect anything inside the container's network namespace — it's essentially only reported back to Kubernetes (or Multus, Genie, etc.) in the result struct. But where I think you are correct is that plugins currently do potentially modify the default route inside the namespace in certain circumstances, and that's what we need to be careful about. Okay, let's see — so, basically, two things: kube currently doesn't care about DNS information from CNI plugins anyway.
B
Yeah, I know — and it's been like that for years, and I'd like to change that. I looked into it, and there were some complications around backwards compatibility. There's a whole number of ways you can tell kube to set up DNS: by default, I think, it copies resolv.conf into the container; other times you can have it add its own DNS servers, so that you actually go to cluster DNS as opposed to your host's resolv.conf; and there are a couple of other options.
B
I think you can actually embed name servers into the kubelet config itself, through the command line and things like that, too. But another option we could add is to use the information from CNI — and that would be great, because then the default network could return DNS information that would actually get used, and it would solve a lot of our problems around multi-tenant DNS and things like that.
G
The CNI spec, as I recall, assumes that you call a CNI plugin once for each network interface — which I don't think anyone actually does in practice, but I think that's what the spec assumes. And under that assumption, in principle, the CNI spec wouldn't allow every single plugin to set a default route; that wouldn't make sense. So we may actually be okay here, although in practice, I bet there's — yeah.
B
Unless — that's how it works right now, because typically you don't run multiple network attachments in this way. So — I think I said this in the comments in the doc, too — there's definitely some room to clarify this. There are workarounds, though: for example, whenever Multus or whatever runs one single CNI configuration or config list, it could somehow enforce the default route.
A
Yes — I think that's the simplest, and probably the best for the reason that Peter mentioned. The CNI spec kind of begs this question, and it really is properly a CNI question, right: CNI needs to get more explicit about whether the caller of a plugin expects the plugin to establish a default route or not — yep. And in lieu of that, or until then, I think the easiest way to work around it is simply to use plugins that behave in a convenient way. Yeah.
A
There seems to be a misunderstanding here, sure. So — and again, you know, this "network object" terminology is so confusing — yeah, like "attachment description", okay, right. At least in the use case that I've been working on — which is, again, VMs in pods — for a traditional VM type of service, the user has the option to specify the IP address, the MAC address, and some QoS constraints on each attachment. So those are per-attachment details that —
B
So the option here is that we could add additional attachment annotations to the pod — specified by this document — that describe things like the requested IP address, MAC address, that kind of thing. QoS is already sort of handled by some things in Kubernetes, although it's not particularly descriptive: there are bandwidth annotations already in kube that are kind of a de-facto standard, really. Let me find those for you.
B
They got specified so that the internal Kubernetes networking code — kubenet and, really, essentially the predecessor to kubenet — could implement some bandwidth stuff. They're pretty limited: basically just egress and ingress bandwidth limits. So you say "this pod gets five meg, no more", you know. Mm-hmm.
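The de-facto bandwidth annotations being referred to are, as far as I know, `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
  annotations:
    kubernetes.io/ingress-bandwidth: 5M   # "this pod gets five meg, no more"
    kubernetes.io/egress-bandwidth: 5M
spec:
  containers:
  - name: app
    image: busybox
```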
G
One fairly short thing that was mentioned is that Services wouldn't be affected by these additional — sorry, I'm going to call them networks; I haven't quite got my head around the terminology. So: we've got additional interfaces in our object — will the IP addresses for those appear in Services, i.e. DNS, or —
B
At the moment, the specification specifically states that it is up to specific plugin implementations whether these sidecar networks have any interaction with Services and other components of the kube API. We had a long discussion about how this would impact Services last fall, and out of that discussion it seemed like there would be quite a lot of impact, and we weren't quite ready to start changing the kube API in those ways.
G
By "Services", all I'm really referring to is service discovery — that's all I care about, which is DNS. And that's okay — I mean, the rest of Services is all very nice, but we don't rely on that; we do usually rely on DNS lookups. Not having that would be a problem, but it's not that we don't have it; it's just that we have to do some work to implement it in our plugins, which I think is fine. Yeah, correct, yeah.
G
It may well be that I've missed this — I've missed most of these calls so far, so I may well have just missed where it was discussed, sorry. But I'm happy with a model where, to get endpoints set up, we have to write some code ourselves to, say, go scan through objects, read some annotations, and write out the endpoints — a sort of endpoints-writer controller.
G
I am not interested in the load balancing, from this point of view; all I want is DNS. So, from the point of view of my use case, I can get what I need by writing some code on top of what we've got. That's where I'm coming from, and why I'm kind of happy saying "fine". I do understand that as soon as you start trying to use the Kubernetes load balancer, you —
B
When you say extra endpoints, what do you mean? Do you mean Endpoints objects in the kube API, or do you mean extra attachments? I'd bet extra Endpoints objects in the kube API. I would leave that up to the plugins, because if they want to — I mean, essentially, I don't see the difference between putting in the sentence that I had put in, which is "it's up to the implementations", and just leaving that out completely.
A
— the endpoints explicitly managed, and I think that is not forbidden by what I proposed. What I proposed to say was that the plugin doesn't make Endpoints objects — sorry, I didn't say that quite right: what I meant is that the CNI plugin that gets invoked doesn't make extra Endpoints objects. If you want something else that makes extra Endpoints objects for your own purposes, nothing would stop you from doing that. Yeah.
G
And no — I don't think it'd be right for the plugin to do it, because there are too many loose ends: you know, "I've leaked some IP addresses" and all the rest of it — whereas something that can scan periodically can catch all those leaks and be smarter about it. So I'm quite happy with leaving it fairly vague and leaving it to us to do stuff, because the reality is we're going to end up writing a separate thing.
C
So what we want to say is: for now, at least, the service controller is not expected to pay attention to network objects — I mean, automatically created Service endpoints will not reflect anything that we do. We don't say anything about whether somebody else is going to manually create endpoints, because we have no reason to say anything about that. Yeah.