From YouTube: Multi-Network community sync for 20230906
A
All right, welcome everyone to the multi-network community sync. Today is September 6th; we are kicking off a new month. I finally got to some of the action items I had from last week, so let's walk through those. And I see Peter has some questions; let's answer those. If you have any other questions, any other topics you want to touch on here, just add your items to the agenda.
A
So I tried out the failure cases. To remind you, there was a question about how we would behave if we specify something in the pod spec that, for example, doesn't exist, or, let's say, a network that is not ready. What do we do with such cases? How do we report the failure? I tested this on a volumes example: basically, you can attach a volume to a pod spec and then attach a mount into a specific container.
So basically I tried to attach a non-existing ConfigMap, and the failure is reported through events. I am planning to follow the same code paths and the same template for any and all errors for pod networking: basically, use Kubernetes events on the Pod to state that, let's say, pod network "foo" does not exist, or is not ready. So that's what I'm thinking of doing around that. Any questions on this? I think it's fairly straightforward.
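As a sketch of what that could look like (the pod name, network name, reason, and message strings here are all hypothetical; the actual wording would follow the existing volume-mount error template), such an event might show up on the Pod like this:

```yaml
# Hypothetical event, as it might appear via `kubectl get events`;
# the reason and message strings are illustrative only.
apiVersion: v1
kind: Event
type: Warning
reason: FailedPodNetworkAttach   # made-up reason string
involvedObject:
  kind: Pod
  name: mypod
  namespace: default
message: 'pod network "foo" does not exist'
```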
If not, then the next one is the test infra CNI, something I brought up last week: having some sort of enablement for continuous-integration testing in Kubernetes. I deliberately try to avoid the term "reference implementation", not because I don't want to create a reference implementation for this feature, but because I don't think that will be possible; it may be too much work.
What I'm aiming for, the design I put in the doc, is just to create a testable mechanism for the API. That's where I want to define the quality of this feature. I talked with Antonio, and basically there is something called kind CNI that he created to enable testing in the kind cluster.
But after talking with him, he recommended creating something of our own: some sort of simple CNI that we could maintain, semi-independent from the main k/k repository and from kind as well, because kind is another project, so we probably wouldn't want to put anything over there.
So that's something I'm going to write down and propose in the KEP: to do something similar to this. Any questions on this piece?
B
Sorry, so a couple of things, both of which I hope are quite quick. I was interested in versioning. When we present this to the various other SIGs, one of the arguments is going to be: well, it's an alpha thing, and then we'll evolve it, and it'll make its way into beta and into core over time.
A
To be honest, I don't know either. This is something I'm planning to learn as we go; I count on the Node team and the API team to tell us. So yeah, that's one unknown. I don't know how that's going to work, but maybe I will say this:
For the DRA work, I saw in the code that the DRA project has alpha feature flags around its fields in the Pod spec. I would imagine we would definitely do the same. That's a fair point; I need to make sure that we mention a flag. Basically, I would imagine there are some means to guard a section of the core spec where we can do that.
A
I would imagine those would be feature flags everywhere, across the board. So kubelet: we'll see; I'm not sure whether we're going to introduce any changes in kubelet. I think we might, for the default network. But mainly there will be flags in KCM; every component we touch will have the flag. The API server definitely would have pod spec changes, so a feature flag there.
Then the KCM, the kube-controller-manager, and probably kubelet; basically, every place we touch would have to expose that feature flag. If I'm not mistaken, the way feature flags are designed in Kubernetes, there is one argument where you provide a key-value map, and we would just support a new key-value pair in that flag. That's how it's implemented in Kubernetes.
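As a sketch of that flag shape (the gate name MultiNetwork is purely hypothetical; the real name would be settled in the KEP), each touched component would accept the same key in its `--feature-gates` key=value map:

```shell
# Illustrative only: the same hypothetical gate wired through every component.
kube-apiserver          --feature-gates=MultiNetwork=true ...
kube-controller-manager --feature-gates=MultiNetwork=true ...
kubelet                 --feature-gates=MultiNetwork=true ...
```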
A
It might be, yeah. I hope the feature-flag mechanism works for APIs as well; I saw DRA doing it. That's something that can be checked; I don't know, to be honest. Maybe that's something we can ask the API team when we go present our stuff, and that may be one of the questions to them in terms of alpha and the versioning.
We'll see how we go. If we want to get this in, fingers crossed, in December, then it will be v1alpha1, and then, depending on how the work goes, we'll move through the stages accordingly: alpha, beta, and then eventually GA, just plain v1. But let's see how that goes.
Maybe, as we start playing around with this ourselves, we'll see that some other changes are required; we'll see. Basically, this also boils down to the phasing that we want to introduce for this whole thing, so maybe the API team will have something to say here; they may not like it.
A
I'm,
not
sure,
though
I
talked
about
this
with
maybe
not
about
the
versioning
themselves
itself,
but
about
the
phases
of
the
cap,
and
that
was
basically
the
proposal
from
Team
on
this,
and
she
said
that
we
it's
okay
to
just
having
a
single
cap
and
then
keep
updating
it
with
things
right.
So
basically,
that's
that's
the
agreed
path
for
this,
we'll
see
how
that
goes
with
the
other
six.
But
here
that's
what
I
have
a
mandate
for
kind
of
okay.
B
It's all going to be okay, because we can back it out with a feature flag. So I'm happy; that sounds good. And, as you say, if we talk to the relevant SIGs, we'll find out more. Okay, the other question I had, and this is really just about expectations: we've got a doc with the PodNetwork object and so on. Do we have a timeline or expectation for it?
A
So I was about to update you on that. I think I went through about half of the doc already, so I'm getting there; it's at the top of my priorities list to get it sorted out. We had a day off yesterday, or two days ago, so it's a bit of a shorter week, but, as I said, I will do it by the end of the week.
I am finding out some new details, but I will just write them down in the doc and then everyone can call things out if anything is off. That's basically the gist of it; I'm trying to get it sorted out by the end of the week. Okay?
A
But it's a really good start. It's quite outdated as well, because we slightly tweaked the PodNetwork, and the network-attachment object is in there as well, so it's not fully defined. But feel free to just create PRs against this branch. I need to get to it and update and rebase the branch; right now it's very outdated.
It's from half a year ago and I haven't touched it since, but I think we can start doing something there, on that part, and just add stuff. At least it can be someone's starting point on what I'm thinking and how this can look.
B
That's good. There's a possibility, if we do some prototyping, that we'll have to write some of the basic code ourselves, in which case I'll just make sure I feed that back. I don't know how far we'll get on that. Okay, that's great; thanks for that.
A
All right. Dave, is that right? Dave, yeah.
D
Yeah, my name is Dave. I'm part of the Knative community, and I'll give you some context on the Gateway routability. Essentially, in Knative we have a use case where this one resource we have eventually trickles down and starts programming a bunch of networking resources. For example, if you're using Istio as your networking layer, it programs VirtualServices; if you're using Contour, it does HTTPProxies. We're looking to adopt the Gateway API.
But one of the features that it lacks is this concept of what we've termed routability. For example, I want my Knative Service to only be accessible within the cluster, as an example.
D
That kind of cluster-local visibility, because by default these Knative Services are exposed via the Ingress to the public internet, via some load-balancer IP. So I went to the Gateway project, created a proposal, and got buy-in from everyone about some of the mechanics there. Then, when I went to SIG Network for a final review, the feedback was on some terms I included, like "public", which kind of referred to, hey, the network. I'm just here to get thoughts and feedback, because so far, when I skimmed what you've done, you seem to be addressing pod networks. I think gateways map a little more to Kubernetes Services of type LoadBalancer, where you might want to get an IP on different networks: for example, a load-balancer IP for a VPN, a load-balancer IP for the public internet, and so on.
A
From our group's discussions, we didn't get to address Services themselves yet; I think this is why folks are asking about it. And to give you some context: I'm not saying that we internally did some experiments or discussions on this. We didn't, in this group, but we eventually will be talking about how to handle Services.
A
But
maybe
this
is
where,
because
you
say,
routability
Global
to
private.
One
of
the
aspects
of
this
is:
do
you
how
you
will
treat
with
treat
that
against
the
Pod
Network
right,
because
we
are
introducing
pod
Network
as
another
dimensional
trying
to
as
an
another
dimension
of
the
pots
right
today,
pods
one
dimension,
for
it
is
which
namespace
it
is
in
right
and
now
the
next
one
will
be
on
which
network
it
is
on
right.
A
So,
and
this
is
and
basically
I'm
not
sure
whether
this
is
specific
to
your
add-on
to
the
Gateway,
but
because
it
will
be
applicable
to
services
and
then
gateways
themselves
as
well
right
how
those
will
behave
with
those
Concepts
right
with
the
Pod
networking
and
and
all
that
in
your
case,
I'm.
Not
sure
whether
there
is
anything
specific
because
public
versus
is
there
anything
else
on
that
or
just
public
versus
you.
D
Let me open the GEP. Essentially, to me, "public" sort of maps to just when you request a type LoadBalancer, and then by default you'll get a public IP. Maybe... oh, did I link it? Let me go back and open the GEP.
D
So for my use case it's really just cluster. But generally, "public" to me is just: hey, I get an IP that's essentially accessible on the public internet, kind of what a load balancer does now. "Private" is the interesting one, I think, because I know each implementation, or I guess each cloud provider, has annotations that let you configure the resulting load balancers up front on the Service. So maybe like that.
That's sort of why I think "private" would maybe be considered VPC-only. And then for me, I just care about cluster: I just want a Gateway that is not accessible on the public internet, and maybe not even on the private one. But I think the interesting thing here is: okay, how do I make this compatible with what you're going to do in the future?
Yeah, I wonder if, rather than having routability, it's maybe just a network. I'm just solutioning here, but in order to make this work with whatever you come up with in the future, maybe it's just a reference to some Kubernetes resource, and it's abstract. So, maybe: are you going to have such a thing as a service network, or...
A
What at least I'm thinking, and this is something we still need to discuss within our group, is that all the objects would eventually have a podNetwork field; maybe even added to metadata, why not. But now I'm going crazy; something like that, though. So basically a Service is a namespace and a name, and then, in its spec, optionally, the network it belongs to.
A
It
can
be
assigned
to
a
specific
Network
I,
don't
see,
and
that's
something
that
we
can
do-
we're
probably
gonna
discuss
in
the
future.
But
I
would
not
see
a
service
belonging
to
multiple
networks.
Right
I'm,
not
sure
whether
that
would
work
right,
you
would
have
if
anything,
I
would
imagine.
A
network
belongs
just
to
one
pod
Network
right.
That.
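A minimal sketch of that idea, assuming a hypothetical `podNetwork` field in the Service spec (nothing here is settled API; the field name and network name are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: team-a
spec:
  podNetwork: vlan-100   # hypothetical: binds the Service to one pod network;
                         # omitting it would mean the "default" network
  selector:
    app: backend
  ports:
    - port: 80
```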
A
Then, how do we want to position Gateway? Because, going into networking terms, a Gateway is basically a router. So should a Gateway allow me to connect to multiple pod networks at once? That's another thing to consider. Or should it be like Services: assuming a Service belongs to just one pod network, should the Gateway be limited to that as well?
A
So,
basically,
then,
if,
if
we
just
go
the
simple
way
and
let's
say
Gateway
is
assigned
to
a
specific
pod
Network,
the
public
private.
What's
not
is
within
that
Network,
because
that's
what
you
do
today
anyway,
right
you
have
a
cluster
connected
to
a
network
and
then
I
make
it
either
this
the
default
Network,
which
we
call
it
out
in
our
our
spec
in
our
dock,
where
the
today's
network
is
just
default
right
and
within
that
Network,
you
call
a
Gateway.
Are
they
public
or
private
right?
Is
that?
A
Okay,
when
you're
going
to
have
multiple
networks
and
to
still
keep
to
the
same
concept
right,
I
gonna
have
this
Gateway
assigned
to
a
specific
pod
Network
and
then,
within
that
Network
I
define
whether
it's
public
or
private
I
think
that
they
ask
you
to
come
here,
because
your
other
infrastructure
right,
we
are
defining
some
sort
of
a
networking
infrastructure
as
well
here.
I.
Think!
That's
why,
since
you
introduced
this
one
because
initially
I
was
thinking,
why
why?
A
Why
are
you
coming
here
and
now
I
see
because
you
adding
this
field
I
think
this
is
looking
like
future
wise
I
think
this
would
get
expanded
by
another
object.
A
field
called
pod
Network
right
that
this
this
guy
is
this
Gateway
will
be
assigned
to
a
specific
pod,
Network
I.
A
Don't
think
we
can
decide
it
that
today,
because
that's
I
don't
think
within
our
our
group
that
make
we
can
decide
that
this
is
something
that
we
probably
would
have
to
Ace
for
a
stock
with
service
folks
and
then
the
Gateway
API
folks
coming
from
us
right
and
how
we
would
assign
things
I.
D
The other thing I would think about as a use case: as a user, I might just say I'm using a Service, and I'm creating a route for my Kubernetes Service, but then I might want to attach it to multiple Gateways that would surface it: hey, make this accessible to the public, and also make it accessible to a VPN, or cluster-local, just my private network. Does that use case still fit in this kind of model?
A
It does, it does. And here you can think of the underlying networks, because a PodNetwork doesn't represent only one thing; depending on the implementation, it can represent different things. That, unfortunately, is a thing; we're trying to be flexible here. But, as I mentioned, it represents a handle for a network.
A
However,
you
define
the
network
for
your
implementation
and
then
pod
can
attach
to
that
right
and
you
can
reference
it
through
a
pod
or
anything
else
right
and,
let's
take
a
a
more
simple
case.
Let's
assume
the
Pod
Network
represents
a
a
VLAN
in
your
network
in
your
infrastructure
right,
so
basically
one
network,
one
VLAN
right
and
basically
your
your
nodes
have
access
to
all
the
vlans.
In
that
case,
what
does
the
Gateway
means
in
your
in
your
situation?
Right?
Let's
assume
that
situation
right,
so
your
gateway
public.
A
Does
it
mean
that
I
can
attach
to
multiple
pod
networks
and
and
basically
route
between
the
vlans?
Does
that,
but
that,
as
as
well
assumes
you
have
routability
between
the
vlans
in
your
infrastructure,
because
kubernetes
definitely
will
not
will
not
enable
that
routability,
we
don't
want
to
go
there
unless
you
of
course
create
your
own
some
sort
of
virtual
function.
That
will
do
the
routability.
But
you
see
the
you
see
the
problem
here
or
maybe
not
the
problem
or.
D
Yeah, I think the private one is a bit more complex. But let's just talk about Kubernetes Services: in the same scenario, if I have a bunch of pods on a network and I create a Kubernetes Service of type LoadBalancer, let's say on GKE, would that then give me a public IP?
A
Definitely. But if we were to... and I'm speculating here, because I don't want to impose on anyone, but let's put some kind of frame around this so we can discuss it. Let's assume the Service is assigned to a network, so my LoadBalancer-type Service is advertised with a VIP from that specific network.
A
My
pod
network
is
VLAN
100,
so
basically
my
VIP
will
be
from
VLAN
100
and
it's
reachable
within
that
VLAN
only
right,
unless,
of
course,
my
infrastructure
allows
routing
and
what's
not
that's
another
aspect,
but
let's
assume
it's
all
isolated
right.
So
let's
say
it's
it's
limited
within
that
specific
VLAN
scope
right.
So
basically,
if
I
create
a
a
a
service
level,
bouncer
type
service
on
a
pod,
Network
VLAN
100.
This
is
where
the
the
VIP
will
be
broadcasted
in
right.
So
all
the
pods
connecting
to
that
pod
Network
have
access
to
it.
But if there is routability between specific VLANs, I should be able to reach it, because it's a basic LB-type Service, so it's external. You can basically treat those pod networks as separate clusters: each network is a separate cluster within one cluster, because that's what it is if you implement your networks as fully isolated. And how do you interconnect multiple clusters? Through LB-type Services.
A
So
basically
you
create
a
lb
service
and
then
I
can
reach
across
right.
If
you,
if
I,
have
route
between
the
vlans,
but
that's
what
it
means
right.
We
definitely
in
that
case,
you
would
want
to
prevent
cluster
IP
leaks
right,
because
that
lb
lb
type
service
will
still
have
cluster
IP
and
basically
I
would
assume
in
in
let's
say
still.
I'm talking about the cluster IP in this specific case; for the VIP, that all lies with the load balancers. On the cluster IP itself: Antonio is going to create an object which will define your service CIDR. Today it's set through arguments in KCM and kubelet, but there will be an object for that eventually, and then what we are thinking, I would think, is to provide a pod network to that object.
A
Right
and
then
I
can
create
multiple
of
those
serviceiders
per
each
pod.
Network
and
I
will
have
separate
a
separate
serviceiders
per
Network,
and
then
the
cluster
IPS
has
its
own
ranges
and
what's
not,
and
of
course,
then
the
question
will
be
whether
I,
isolate
or
not.
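A sketch of that direction, assuming the ServiceCIDR object Antonio was designing grows a hypothetical `podNetwork` field (the API group, version, and field names here are all speculative):

```yaml
apiVersion: networking.k8s.io/v1alpha1   # speculative version
kind: ServiceCIDR
metadata:
  name: vlan-100-services
spec:
  cidrs:
    - 10.96.0.0/16      # v4 cluster-IP range for this network
    - fd00:96::/112     # v6 cluster-IP range for this network
  podNetwork: vlan-100  # speculative: scopes this CIDR to one pod network
```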
That, of course, depends on the implementation of the pod network. Assuming it's still isolated, one thing you should be doing is not leaking endpoints, because you should not be able to access them, even within the same cluster.
I should not be able to access endpoints from the other network, because that's across networks. Of course, this all boils down to implementation: if my networks provide full isolation, then I cannot access cluster IPs between networks. That's one of the approaches. So, having that in mind, how do we then treat the Gateway? Is the Gateway still just within the boundaries of that pod network?
Basically, because that's what you do today. Now we need to define how the Gateway would behave when we have multiple pod networks. Today you have just one, "default", and you basically don't think about it, because you have just one network and you work within the confinement of that. What if you have multiple of those? Do we want to define the Gateway, then, as assigned to a specific pod network, working within the confinement of that?
A
If
you
want
to
do
that,
this
is
the
easiest,
because
this
is
what
you
do
today
and
the
only
thing
what
you
need
to
provide
is
the
isolation
insulation
between
Deadpool
Networks.
Or
do
we
want
to
do
more?
Is
the
Gateway
doing
something
more
distant?
That
and
that's
the
probability
question
right
now:
I,
don't
think
we
have
a
answer
I
we
have
so
I
have
some
ideas
personally,
but
I
don't
think
we
have
an
answer.
I'm
keep
talking
all
the
time.
Anyone
else
has
some
opinions.
Some
thoughts
on
this
from
the
group.
A
I
think
that
Dave
I
don't
think
from
our
point.
This
is
just
I'm
trying
to
provide
you
with
where
we
are
heading
so
that
you
are
aware
of
of
what
we
are
trying
to
introduce
I.
Think
from
the
point
of
view,
what
you're
doing
here,
you're,
okay
and
basically
but
but
it
will
be
okay,
it
will
be
non-invasive
if
we
assume
that
it's
the
isolation
is
in
place
between
four
networks
right,
so
our
service
is
assigned
to
a
specific
pod,
Network
and
then
indirect.
The
Gateway
is.
Is
that
as
well?
A
If
that,
if
we
assume
that
and
let's
say,
Gateway
cannot
be
shared
across
multiple
pod
networks,
then
basically
you're
you're,
all
fine
right,
because
then
we
would
just
add
to
the
Gateway
infrastructure.
We
would
add,
another
field
called
pod,
Network
right
and
and
then
it
will
be
if
it's
not
specified.
It
basically
means
it's
it's
using
the
default
network,
but
when
it's
specified
it
means
it's
assigned
to
a
specific
to
a
specific
Cod,
Network.
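Putting the two ideas together, a sketch of a Gateway carrying both Dave's routability value and the speculative `podNetwork` field (neither field is part of the Gateway API as discussed here; names and values are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: internal-gw
spec:
  gatewayClassName: example
  podNetwork: vlan-100   # speculative: confines the Gateway to one pod network
  routability: Cluster   # Dave's proposed enum, e.g. Public | Private | Cluster
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```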
So the enum of routabilities would be within the confinement of a specific pod network. That's what I would imagine, but the question is: is that okay with you? You tell me. I gave you an example of a pod network being a VLAN; let's say you have that connected to your cluster, and the confinement of this enum is within that pod network. Is that good for what you want to do, or are you assuming public?
D
I think I have a more naive view, because I'm not a networking expert. For example, "cluster" to me might just imply the default network, so I just get that. On "private", I would just assume... I guess, would a VPC on GKE or AWS map to a pod network?
D
Then I would associate that: in order to have private, to make this extensible with what's incoming, for this routability enum I would just say, oh, I need to associate this Gateway with a network.
D
I
think
it's,
the
sort
of
like
the
inbound
routing
like
VPC
I,
would
say.
I
know,
doesn't
have
they'll
give
the
load
balancer
an
IP
that
I
that
is
not
accessible
on.
A
The
internet
versus
okay,
so
so
it's
internal
versus
external.
What's
your
private
versus
public?
Okay!
That's
that's!
Yeah,
I'm
familiar
with
that,
so
basically
within
VPC,
and
maybe
to
the
group.
If
you
know
someone
is
not
familiar
in
VPC,
even
if
you
connect
your
your
VMS
in
a
specific
vpcs,
you
can
all
the
IPS
within
the
vpcs
are
internal
right.
So
basically
a
private
would
mean
I
can
act
as
a
cluster
from
outside
a
cluster,
because
there
is
that
aspect
right.
A
How
do
I
access
a
service
from
outside
the
cluster
with
a
VM,
so
basically
I
would
create
a
service
which
is
basically
load,
balancer
type
still,
but
as
a
private,
so
it
gets
a
VIP
which
is
within
that
VPC
range
right,
so
that
will
be
private.
And
that's
that's!
That's
perfectly
fine!
The
question
here
to
you,
Dave,
is
in
public.
A
You
attach
basically
the
external
ip2,
specific
VPC
and
and
forward
it
to
that
specific
VPC.
Is
that
the
goal
is
there
a
case
where
you
would
for
public
I
would
want
to
attach?
Because
that's
what
you
do
today
right,
a
cluster
is
created
in
one
single
VPC
and
you
route
traffic
to
that
single
VPC
right
from
external
yeah.
Is
there
any
way
their
case
where
you
would
want
to
route
to
other
vpcs
or
multiple
vpcs
I'm?
Just
because
of
that.
D
For me, no, I don't have that use case. But I'm not the person to ask, because I only want the cluster-local routability. With that in mind, though, I would just assume: couldn't you define...
A
Yeah, we could, if you could do that; that's a good point. Then that would fit within the modeling I'm thinking of, where the Gateway and Services are bound within that pod network. And if your pod network ends up being multiple VPCs, that's up to you, of course. But yeah, that's fair.
D
And then, because I assume you can have multiple pod networks, maybe there's this higher-level structure that merges them. All I think I would need here is a reference to that network, or some way to signal that this is a thing. So I think that covers the private case; "cluster", you could say, might be the default. But for the public network: does that mean cloud providers would need to create a sort of pod network?
A
So we are covering the case where you don't have to, if you don't want to support it. If, let's say in two years, fingers crossed, we GA the feature in Kubernetes, the plan is to ensure that it's completely backward compatible, so you don't have to do anything extra.
A
If
you
want
to
you,
can
leverage
the
feature
to
some
to
your
advantage,
but
if
you
don't
want
to
you,
don't
want
to
keep
it,
as
is
there
will
be
a
means
where
it
will
auto
create
a
default
Network
and
you
are
done
right
and
and
basically
that's
what
the
only
thing
that
you
needed
is
the
default
Network
to
be
created,
and
then
everything
else
is
just
attaching
to
that
right.
The
way
it
is
today
right,
so
nothing
changes
from
your
point
of
view.
A
It's
just
and
now
you
have
a
default
Network,
which
the
one
that
you
connect
to
is
just
there
is
an
object
for
it
and
everything
that's
kind
of
is
created
as
you
do
it
today
in
let's
say
two
years
it
will
be
attached
to
that
default
network,
but.
A
Is
no,
there
is
no
so
that
aspect
of
the
network,
the
network,
is
what
whatever
it
is.
You
you
define
what
it
is
you
cannot.
You
cannot
from
kubernetes
specify
that
it's
public
or
external,
the
basically
network,
is
and
and
you
Etc
handle
to
the
network.
If
that
network
is
public
or
not
that's
up
to
you
right
to
the
implementer,
but.
D
I guess, for example, how do I... and if I'm repeating the same questions, then let me know, but maybe just:
with cluster IP: when you say "the network", it attaches to a default network. I then don't know how I specify that I want a cluster-IP equivalent and also this load-balancer-IP equivalent, where, in theory, they have different reachabilities.
A
If you don't want to, you don't have to use and leverage those networks. You can still think of it as: there is a Kubernetes cluster, it has some network, and that's it. From your point of view, if you don't care about multiple of those, you don't have to worry about it.
A
I
have
to
create
another
service
either
per
Network,
because
that's
by
default
everything
if
it
doesn't
specify
a
network,
it
will
belong
to
the
default
one
right,
that's
what
it's
that's,
what
what
I
would
imagine
being
where
you
don't
have
to
it
should
be
Backward
Compatible.
So
whatever
you
do
today,
Gateway
doesn't
have
any
pod
Network
field
in
it.
Basically,
that
means
it
belongs
to
the
default,
because
that's
what
it
is
today
right,
everything
that's
being
created
today
in
today's
kubernetes
cluster
belongs
to
the
default
Network.
A
For load balancers, as I said, it's up to the load balancers how they manage the IPs and whether they're going to support multiple pod networks; that's separate. We don't control that. You can have a specific cloud provider's load balancers, or, let's say, F5, and they're going to do their own thing; which IPs they support is up to the LB implementations.
D
Yeah
and
that's
where
I
kind
of
view
like
it
feels
like
service
Community
Services
have
this
Duality,
where
they
have
like
the
local
network
and
the
external
network,
and
and
that's
where
I
feel
like
the
public
ties
to
the
external
network
right
and
that's
what
I
was
wondering
if
it's
going
to
be
that
external
network
is
going
to
be
realized
in
the
cluster
in
some
way,
maybe
it
doesn't
have
the
ciders
defined
or
something
like
that,
but
just
for
the
sake
of
like
being
able
to
say
hey
this
kubernetes
service,
not
only
am
I
like
attaching,
let's
say
to
the
default
pod
Network
that'll
give
me
the
cluster
IP
that
I
want
I'm,
also
attaching
to
this
external
network,
which
will
then
potentially
give
me
the
load,
balancer
IP.
D
D
I
see
yeah.
I still view these as distinct. To me, it sounds like you're saying that whenever I want a public IP, it's still part of a network.
A
That's basically what a pod network is in the implementation. The easiest would be, as you're saying: let's say it's a VPC, so my pod network basically equals, and translates to, a cloud provider's VPC. That can be one of the approaches.
A
But
I
think
this
is
the
easiest
in
terms
of
just
to
kind
of
then
then
fit
it
here
and
then
fit
it
with
all
the
services
and
and
the
gateways,
and
all
that
and
then,
because
I
think
you
you're
treating
a
network
as
a
different
IPS
that
different.
That
is.
This
is
different
network.
No,
so
that's
as
a
whole
right
in
the
this,
in
this
terms,
think
as
a
network
as
the
whole
kind
of
package,
where
you
have
the
service
cider
IPS,
the
public
IPS
from
LDS,
the
VPC,
the
the
Pod
IPS.
D
No, the way I kind of phrase this is that the scope of these gets reduced; "public" has the largest scope.

A
Okay!
D
Exactly. So then, if you request a public address, or rather a Gateway with public routability, I might actually end up with three addresses: my public IP, my private IP, and then my cluster IP in my list of addresses. Then, if I'm trying to hit this service, I know which IP to hit, because if I'm on the cluster, I would use the cluster IP rather than the public one.
A
We confine it to a single range per family per pod network, so each pod network can have two ranges, one per IP family, so v4 and v6, right? But that's just for the IPAM for pods. And let's make sure that we don't limit, because pod network... right now, that's what we did in our KEP, and this is where we kind of stopped, but eventually what I would envision is:
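The constraint just described could be sketched with a hypothetical validator (the function name is made up): a pod network's pod IPAM carries at most one range per IP family, one v4 and one v6.

```python
import ipaddress

def validate_pod_ranges(cidrs):
    """Allow at most one pod CIDR per IP family (v4 and v6)."""
    seen = set()
    for cidr in cidrs:
        family = ipaddress.ip_network(cidr).version  # 4 or 6
        if family in seen:
            raise ValueError(f"more than one IPv{family} pod range: {cidr}")
        seen.add(family)
    return True

print(validate_pod_ranges(["10.0.0.0/16", "fd00::/64"]))  # True
```

A dual-stack pair passes, while a second v4 range on the same pod network would be rejected.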
A
Eventually, when we define a pod network, it will have to have a service CIDR as well, and then your load balancers will have to have their own range of IPs for the public IPs; same goes for the private range as well. So basically, a pod network will have to contain all of those. So this is, as we mentioned here, pod network will be a field in this object that will be trickled down to all the other objects, and basically, maybe your load...
D
The funny thing is, I initially did this proposal not at this higher-level contract for gateway routability, but had it a bit more low-level, like "give me an address of this scope", and then that was sort of pushed back against. Because, based on this discussion, it kind of seems like a pod network will have potentially different address CIDRs for different things.
D
The same, yeah. And I think then what this gateway standard should be is: specify the network, and specify the types of addresses I'm looking for. And I think that's maybe what Tim was pushing back on, where private and public are really ambiguous. But if you define these types of addresses in multi-network, then the two projects would work well together.
A
Yeah. So one aspect to this is private versus public. I'm not sure how that fits into on-prem environments. It fits perfectly into cloud environments, but how does that fit into on-premise environments? Because I don't think there is a concept of private there. So is that too much of a... maybe not; maybe then private equals public there, right? Yeah.
D
Like, you could have a network of public addresses and stuff like that. And I think, for me to be able to move this forward with the SIG Network approvers, it would have to be: if you define the types of addresses, or even if they're arbitrary, like the pod network defines them, like "hey, these are my public addresses", as an operator might do, then maybe that's the way that this GEP could move forward and then be compatible with what you're going to land in the future.
A
I don't think you have to worry about that yourself. That's the thing; that's something that we did as well, because IPAM is very intrinsic to the platform provider, or the infrastructure admin, right?
A
Should that person be fully aware of which CIDR is what? I don't think they should. They should just be classes, where you have, say, private, public, and cluster, and then that should automatically, in the background, be resolved. That's my thinking.
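The "classes resolved in the background" idea might look like this hypothetical mapping (the class names and CIDRs are invented): the infrastructure admin supplies the concrete CIDRs, and the user only ever names an abstract class.

```python
# Hypothetical class-to-CIDR mapping supplied by the infrastructure admin;
# users request a class name, never a concrete CIDR.
ADDRESS_CLASSES = {
    "cluster": "10.96.0.0/12",
    "private": "10.0.0.0/16",
    "public": "203.0.113.0/24",
}

def resolve_class(name):
    """Resolve an abstract address class to whatever the admin configured."""
    try:
        return ADDRESS_CLASSES[name]
    except KeyError:
        raise ValueError(f"unknown address class: {name}")

print(resolve_class("public"))  # 203.0.113.0/24
```

This keeps the contract at the class level, so an on-prem operator could map "public" to whatever makes sense in their environment.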
A
Okay, what exactly did Tim push back on here? Did he push back on this, or on the initial approach that you mentioned?
D
Both of them; I think especially public and private are hard to define, kind of like, even like you mentioned, your on-prem example: what does public mean there? And then the other thing was, those become especially hard to define when combined with the incoming Kubernetes multi-network initiative.
D
So, the way you describe things as classes, I wonder then: if the pod network had these abstract classes that could be defined, maybe that's what the contract between the infrastructure and the admin operator is? Like, I don't know, right? Yeah, I think that's where they're just trying to connect the two, because these seem similar. So, but yeah.
D
Thanks for the time.
A
No worries, yeah.
A
All right, we are at time. I think we went through most of the agenda and action-item topics. As I said, I will try to get the refactoring done by the end of the week. I will definitely ping everyone on the Slack channel when I'm done, to let you know that I'm done and you can start reviewing and adding stuff. And that's that. All right?