From YouTube: CNCF Networking Working Group Meeting - 2018-02-27
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
Speaker A: I'm running around the conference trying to find a place to park for a moment; I think I found one. So we've got Brian, Shawn, Deepak... and Mike.
Speaker A: Okay, fair enough, let's call it. We're at eight minutes and Ken has been duly warned; I'm pumped to hear about this.
Speaker E: Okay. So, Azure Virtual Network is a very rich, hyper-scalable, reliable virtual network that we've offered for many years now for virtual machines, and that we have brought to containers recently. It offers lots of capabilities. Starting here on the left: virtual machines or containers can be accessed through public IP addresses directly from the Internet.
Speaker E: We provide load balancing, we provide ACLs, your front end gets a DNS name, so customers can access your service over the Internet using DNS names. We provide DDoS protection. Then, moving here on the right to the virtual network: we offer a very rich virtual network for your virtual machines and containers in the cloud. Basically, customers can specify their own IP address ranges and their own subnets. Then they can specify security groups, by which they can specify isolation policies, like: this security group is not allowed to talk to that one.
Speaker E: And so forth. We support rich service chaining through a construct that we call user-defined routes, which basically lets the customer control how traffic flows from one virtual machine to another, and whether it needs to be sent through a virtual appliance before it gets routed to the destination. So we have a rich suite of capabilities in the private network that customers can set up in the cloud environment.
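The user-defined-route behavior described here can be sketched as a longest-prefix-match lookup. The route table below is invented for illustration; the prefixes, the appliance address 10.0.100.4, and the route names are not Azure defaults, though the "VirtualAppliance" next-hop concept mirrors Azure's route tables:

```python
import ipaddress

# Toy user-defined-route (UDR) table: the most specific (longest-prefix)
# matching route wins. All values here are illustrative.
ROUTES = [
    ("10.0.0.0/16", "VnetLocal",        None),          # intra-VNet traffic
    ("10.0.2.0/24", "VirtualAppliance", "10.0.100.4"),  # force via a firewall NVA
    ("0.0.0.0/0",   "Internet",         None),          # default route
]

def next_hop(dst_ip: str):
    """Return (next_hop_type, next_hop_ip) for the longest matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, hop_type, hop_ip in ROUTES:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop_type, hop_ip)
    return best[1], best[2]

print(next_hop("10.0.2.9"))  # traffic to the protected subnet goes via the NVA
print(next_hop("10.0.5.1"))  # other intra-VNet traffic is delivered directly
print(next_hop("8.8.8.8"))   # everything else follows the default route
```

In Azure itself the same effect comes from attaching a route table with a VirtualAppliance next hop to a subnet; the sketch only shows the routing decision.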
Speaker E: We have a very rich ecosystem of network appliances in the Azure Marketplace that leverage these capabilities to insert themselves into customers' virtual networks. We offer back-end connectivity to customers' on-premises networks; this may be through point-to-site VPN or VPN gateways. We also offer private peering through various ISPs and providers that we have relationships with; we call that solution ExpressRoute. So this is a brief primer on the virtual network that we offer to our customers today. Now, let's look at Azure CNI.
Speaker E: So when we started looking at CNI and how we do container networking for Azure, there were a few things that we observed. One is that we want to leverage the Azure SDN, which is already a very evolved ecosystem, and bring those same SDN capabilities to containers, because when we talk to customers, they essentially need a similar kind of network, and sometimes even more. Some of the core constructs around virtual networks, load balancing, and DNS they need for containers as well.
Speaker E: We realized that in the container world there are multiple orchestrators that exist today to manage those containers, so we have to integrate with these various orchestrators. Of course, we are running in the guest environment; for us a VM is equivalent to a guest, so the operating system there can be either Linux or Windows, and we need to support both. Scale is a big deal for containers.
Speaker E: Sure. A VNet is a virtual network that a customer specifies through a template: "hey, I want to deploy these 10 VMs in these two subnets; here is the IP address space that I want to use for my workload in the cloud." So they define subnets, they define address ranges, they define their security policies, they define their routing policies. It's basically the customer's own private network for their compute workloads in the cloud.
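What a VNet template expresses can be illustrated with Python's ipaddress module. The 10.0.0.0/16 address space and the two /24 subnets below are made-up values standing in for whatever the customer chooses:

```python
import ipaddress

# A customer-chosen address space for the VNet (illustrative value).
vnet = ipaddress.ip_network("10.0.0.0/16")

# Carve out two subnets, as in the two-subnet example in the talk.
subnets = list(vnet.subnets(new_prefix=24))
front_end, back_end = subnets[0], subnets[1]

print(front_end)  # 10.0.0.0/24
print(back_end)   # 10.0.1.0/24

# Both subnets sit inside the VNet space and do not overlap.
assert front_end.subnet_of(vnet) and back_end.subnet_of(vnet)
assert not front_end.overlaps(back_end)
```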
Speaker E: So far, what I've described to you is underlay, in the sense that it is implemented at the host and exposed to the VMs. Now, when we start talking about containers, overlays can of course exist inside the VMs as well; we have an underlay at the host level, and both can coexist.
Speaker E: Provisioning time of less than a second is a requirement for containers, whereas in the virtual machine world the provisioning times can be longer. The one key thing here, as we talk to our customers: customers are not purely container customers or purely VM customers. They may have some workloads that are containerized, and then some other part of the infrastructure that is not containerized.
Speaker E: So what we've noticed for a lot of our customers is that they have a mix of both container-based and VM-based workloads, and they don't want to build two separate networks that are completely oblivious of each other. For these scenarios they want a seamless single network that spans across both, where containers can talk to VMs and VMs can talk to containers, and they want to specify their policies in a consistent way.
Speaker E: So one of the motivations for us to leverage the Azure SDN has been to make sure customers get a consistent experience, no matter in what form they deploy their compute workload in the cloud, whether it be Azure Functions, or lambdas, or containers, or VMs. We want to give our customers a consistent network and a consistent way of managing policies for the network. So let me move on to the next slide here.
Speaker E: Okay, so now I'll start digging more into how we leverage CNI and how we've built this functionality. At the top level I show ACS, which is the Azure Container Service engine. This is the service that today creates or launches orchestrators on behalf of customers. So the customer comes to ACS and says: "hey, create a Kubernetes cluster for me," or "create a DC/OS cluster for me."
Speaker E: So the ACS engine creates that, but the way we plug SDN into that orchestrator is by leveraging CNI, and for that we've written a few plugins. There is a network plugin, which basically attaches container interfaces to the underlying Azure virtual network that is implemented at the host level, and routes traffic on the virtual network. Then there is the IPAM plugin, which is responsible for allocating IP addresses for the containers through CNI. One key thing I want to call out here:
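The two plugins described here correspond to a CNI network configuration roughly like the following sketch. The "azure-vnet" and "azure-vnet-ipam" type names come from Microsoft's azure-container-networking project, but the surrounding field set is an approximation, not a verbatim config:

```python
import json

# Approximate shape of a CNI config wiring the Azure network plugin
# to its companion IPAM plugin (field set is illustrative).
conf = {
    "cniVersion": "0.3.0",
    "name": "azure",
    "type": "azure-vnet",           # network plugin: attaches the container NIC to the VNet
    "ipam": {
        "type": "azure-vnet-ipam",  # IPAM plugin: hands out VNet IP addresses
    },
}

print(json.dumps(conf, indent=2))
```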
Speaker E: The experience that we've offered to customers is one IP address space that they can manage for VMs and containers, whereby a subset of that address space is delegated for container use and the other space may be used for VMs. That way they have a single network, and VMs and containers can freely communicate with each other, rather than building two networks that use their address spaces completely in isolation, in which case they become islands. So, Deepak.
Speaker E: Yeah, it's along the same lines. It's done in our IPAM plugin, and DHCP is the protocol we use to assign IP addresses, but how we make the reservation is through our API, where customers can say: "hey, this address space is reserved for VMs," or "this address space is delegated for containers."
Speaker E: That's part of the Azure platform already. We run a DHCP server for all the virtual machines and for containers in the cloud; it's running on the host, and it's talking to our network controller, which is telling it: "on this VM, here is the IP address that you should respond with when you get a DHCP request from the VM or from the container." So our network controller is provisioning our DHCP server to respond with the right IP address.
Speaker E: One is single-tenant container clusters. A customer comes and says "hey, I want to create my container cluster," and we provide a managed orchestrator experience. What that means, with what I call here AKS, the Azure Kubernetes Service, is that rather than the customer running their own orchestrator, the Azure platform runs the orchestrator on behalf of them, so the customer does not need to deal with managing and running their own orchestrator. Think of it as orchestrator-as-a-service.
Speaker E: So in this scenario the containers will also be connected to the VNet via Azure CNI, and then VNet policy will be available to the containers. The second scenario is the next step beyond the previous one: here we are offering containers-as-a-service, where the customer doesn't even need to care which orchestrator they get to use. They just say "hey, I need X number of containers," and the Azure container service is the one responsible for creating these multiple containers on behalf of the customers.
Speaker E: Okay, so a similar model is there for Azure Functions, where the customer doesn't care about which function orchestrator is being used; they just care about deploying and running their functions. Azure Web Apps is along the same lines: the customer just cares about deploying their website, and the orchestration and management of those websites is left to a central coordinator.
Speaker E: Right, so then the next scenario, which I've alluded to before: for these containers that get created in the Azure VNet, the customer wants rich policies to connect and protect the containers and the workloads running on them, and they're asking us for constructs similar to the ones they're used to for their virtual machines. These are pretty standard constructs in Azure: there's the security group notion, there's the notion of service chaining, and there are VPN and private peering capabilities.
Speaker E: There is another interesting trend that we are seeing: service endpoints. Even when customers are running on a shared PaaS service, like Storage or SQL DB as a service, which is a multi-tenant service, they still want those endpoints to be isolated and protected from other customers. So they want even the endpoints of those shared services to be projected inside their VNet and to be locked down to the virtual network.
Speaker E: So this is, I would say, the next step beyond containers, where the service may not be containerized but still has to provide isolation at the network level on a per-customer basis, by isolating customers into different VNets. Load balancer, DNS: customers want all those capabilities, and they want the policies to be common for VM and container workloads. So based on these scenarios, here are the requirements that we would like to work through with CNI, and to see how we can address them.
Speaker E: One is around policies. I know that right now Kubernetes has a way to specify policies, and maybe that's totally fine, but we would like to be able to bridge the capabilities of the underlying platform with the orchestrator-specific policies, so that a customer may be able to use any number of orchestrators and specify those policies in an orchestrator-specific language, but down below...
Speaker E: ...we are able to translate that into a common way of rationalizing, implementing, and managing those policies. Otherwise, as a public cloud platform provider, we have to write a plugin for each orchestrator for the implementation of policies. What CNI has done an awesome job at is network virtualization, where we are able to plug in to multiple orchestrators in a standard way to implement virtual networks.
Speaker E: I think this notion of grouping and tagging is fairly generic. So if we defined a standard way to group and tag, and then to specify ACLs based on those groups and tags in a standard way in CNI, then I think that will meet the requirements of various orchestrators. Now, different orchestrators may choose to do more advanced stuff, but at a bare minimum, I think if we define the notion of grouping and tagging, that will do it.
Speaker D: The idea here is basically: if you have a basic way of describing tags and memberships in those tags for workloads, then policies can be specified on the tags themselves, like "this tag means you should have access to the Internet," or "this workload should be behind a load balancer this way," or "this is the endpoint I want to expose." All those things can be specified on the tags, and then when the container itself, or the pod itself, is instantiated, it just subscribes to membership in a tag.
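The grouping-and-tagging model just described can be sketched in a few lines: policies hang off tags, and a workload's effective policy is derived from the tags it subscribes to. All the tag and policy names below are invented for illustration:

```python
# Policies are attached to tags, not to individual containers.
# Tag names and policy fields here are made up for illustration.
POLICIES = {
    "web":      {"internet_ingress": True,  "behind_lb": True},
    "database": {"internet_ingress": False, "behind_lb": False},
}

def effective_policy(workload_tags):
    """Union the policies of every tag a workload is a member of."""
    merged = {"internet_ingress": False, "behind_lb": False}
    for tag in workload_tags:
        for key, value in POLICIES.get(tag, {}).items():
            merged[key] = merged[key] or value
    return merged

# A pod that subscribes to the "web" tag inherits that tag's policy;
# a "database" pod stays locked down.
print(effective_policy({"web"}))
print(effective_policy({"database"}))
```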
Speaker D: What we are trying to propose with this call, what we are trying to do, is basically establish the common requirements that we are trying to address. If we can agree, or at least have some commonality on this, it makes sense to extend the discussion to actually submitting a formal PR on the CNI project itself: from the spec point of view, how do we want to extend it?
Speaker E: We want to bring these topics to this forum so that we can all discuss, debate, and decide what the best path forward is. If you think a PR is the best way to move forward, we can submit a PR; if you think we need more discussion about the merits of it, or about where to do it, if it's better to do it in the network SIG, we can take it there.
Speaker E: So we are really looking to partner with all of you to decide how to take this forward, because these are requirements that our customers are putting on us, and we could meet those requirements in a very vendor-specific way, and maybe Amazon would solve them in an Amazon-specific way. What we are trying to do here, in the spirit of openness, is bring it to this forum so that we can all have open discussions around: customers are putting these requirements on us, so what do we do?
Speaker E: Yes, you are right: to the CNCF working group, so that we can take an approach that is orchestrator-independent. But again, as you noted, Kubernetes is making a lot of progress here. We would love for the CNCF working group to take this on and define these capabilities in an orchestrator-independent way.
Speaker I: Brian, this is Christopher. Having had to support multiple orchestrators in a security policy model, I would say that the differences show up once you get into the details between the various orchestrators' models: security groups versus labels versus tags versus profiles.
Speaker I: It may be doable, but the way I look at it, it would require major surgery on any number of orchestrators to try and get to a common model. We've been living this dream for the last couple of years, and I would say that once you get down to not only what the data model is, but what the people who use that orchestrator expect that data model to drive, there are differences that are not just cosmetic.
Speaker I: First of all, everyone who has tried to build a generic networking and security model as a single thing, and I'll point at Nova, Neutron, OpenDaylight, any number of these other platforms that tried to make the all-singing, all-dancing model of networking, has died under the weight of their complexity. So, first thing: let's separate "does this belong in CNI?" from "should we have a common policy model?", sort of like what Ken did. Does it belong in CNI, or does it belong in something else; that is one discussion, and answering it is simple. There are different things, and each one of those has a different data model, unless I then plan to cram them all into one big nasty plug-in; I guess I'm showing where my opinion is on that one.
Speaker I: So that's the first disconnect: in some platforms those things are dedicated and they have dedicated semantics around them. A security group is sort of like a VLAN in a lot of cloud orchestrators, whereas labels are things like "role equals database server" or "stage equals prod" or "application equals..."
Speaker E: I think the scope of what we do in networking will not be to associate semantics with the label itself; it will be more about being able to apply policies with those labels from a networking perspective. Now, those labels can have other semantics outside of networking, and that's totally fine. You may say "front-end," "back-end," and that's fine, but when you want to apply network policies using those same labels is where networking comes in.
Speaker D: It's expressed to the policy controller, and the policy controller takes it and enforces it. So the first thing is: if we can come to a standard specification for how an orchestrator expresses those policies to a network controller, that will help us create an orchestrator-independent way of expressing them to controllers, and then the two can evolve independently.
Speaker D: Well, I agree with that. I think there are a few patterns in what different orchestrators use to describe this commonality across the different workloads they run. We can look at those patterns and work through how to come to a common language specification for policy, unless we come to the conclusion that there are so many different variations that there is no way to standardize.
Speaker A: In part because there's a smaller subset of those doing and running containers that would truly be in need of CNI being multi-container-orchestrator: certainly anyone providing containers as a service or providing container orchestration, and then, you know, large enterprises that have multiple teams that have each picked their own nice things. Solving that problem almost sounds like a decent startup idea.
Speaker F: So, Chris, I know you're going to talk about some IPv6, but before we transition over to that: in terms of the kind of CNI network policy group, some of the service load-balancing stuff and service-chaining aspects you have on here, Deepak, I'd be interested in working with you to kind of come up with some proposals in terms of how that would maybe be mapped. Do others on this call want to work on that?
Speaker I: Yeah, I'd be interested in at least participating to a certain extent, specifically on service chaining: what people are actually asking for in service chaining. Because as people move into containers, I'm hearing sort of different things about whether classical service chaining is still relevant or not. I'd be interested in hearing that as well.
Speaker I: Okay, so I didn't have a whole lot of time to put stuff on slides, so I apologize for doing this a little bit off the cuff. For those of you who don't know, I've done a couple of v6 transitions on fairly large networks in the past and have the scars from it, including ops-area work in the IETF and working-group co-chairing, so I've lived the dream of "getting the world to v6 will happen next year, we promise, this time." So I've been talking to folks who are actively interested in v6.
Speaker I: When I've talked to customers about that, a couple of things come up from potential users. One: there's a class of folks out there who believe, and are probably correct, that a /64 per host will probably blow out their v6 allocations pretty quickly at their size of cluster. Some folks are at the tens of thousands of hosts, and all of a sudden that "infinite" address space in v6 starts disappearing pretty quickly. The other...
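The exhaustion concern is easy to quantify. Assuming, for illustration, a /48 site allocation (a common size, though no specific size is named in the meeting), a /64 per host caps the cluster at 2^16 hosts:

```python
# How many /64s fit in a /48 site allocation (assumed size for illustration)?
site_prefix, per_host_prefix = 48, 64
hosts_supported = 2 ** (per_host_prefix - site_prefix)
print(hosts_supported)  # 65536

# A cluster in the tens of thousands of hosts is already close to the cap.
print(hosts_supported - 50_000)  # 15536 /64s of headroom left
```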
Speaker I: Some of the things that we're seeing from customers as they're looking at v6: automatic configuration of infrastructure. They would really prefer the infrastructure to auto-configure itself, auto-discover itself, set up keyrings if necessary, and so on. We're also seeing a strong request for v6-only infrastructure. People want to preserve their v4 addresses for external services or for legacy workloads, so they would really like to run a v6-only infrastructure that still exposes v4 services and supports legacy v4 workloads.
Speaker I: Each node, each given L2 network, is allocated a /64. In a private cloud environment that could be a top-of-rack L3 switch; it could be an AZ or the equivalent in a public cloud, etc. But every L2 physical infrastructure, or thing that maps to physical infrastructure, is a single /64. Each node would then use DHCPv6 to request a prefix delegation of a longer-than-/64 block for that node.
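The delegation scheme just described can be sketched with the ipaddress module. The 2001:db8::/64 documentation prefix stands in for a real rack prefix, and /80 is one plausible per-node delegation size:

```python
import ipaddress

# An L2 zone (e.g. a rack) owns one /64; the documentation prefix is a stand-in.
rack = ipaddress.ip_network("2001:db8:0:1::/64")

# Delegating a /80 per node, one /64 can serve 2**16 nodes.
print(2 ** (80 - 64))  # 65536

# The first two per-node delegations carved from the rack prefix:
delegations = rack.subnets(new_prefix=80)
print(next(delegations))  # 2001:db8:0:1::/80
print(next(delegations))  # 2001:db8:0:1:1::/80

# Each /80 still leaves 2**48 addresses for that node's containers.
print(2 ** (128 - 80))  # 281474976710656
```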
Speaker I: Each of those delegated blocks could be a /80 or a /96, something along those lines, for that node. This does raise an interesting question: we've now got a huge number of addresses per node. Do we ever recycle the addresses, or do we use addresses once and throw them away, at least until, you know, the heat death of the universe? That gives us some traceability for workloads.
Speaker I: Every time you get a new workload, you get a new IP address, and log events for a given IP address are therefore traceable to a specific instance of a workload, rather than to an ephemeral thing at some point in time, if you're going back and looking at logs later. As a possible addressing side effect, each node would then implement host-local addressing for its containers.
Speaker I: ...at-scale containerized environment, you know? It's more of a "okay, I've put something out there; what other ideas do people have about dealing with v6 infrastructure?", et cetera. I think it goes without saying that until we have parity between v4 and v6 in any orchestration system, we're not going to be able to really move forward, and I know we're working on that pretty hard in Kubernetes.
Speaker I: It is somewhat different, but I have some concerns, if people want to do v6-only infrastructure, about how we're going to manage that at large scale. I think what people have been talking about and blogging about before has been sort of "yay, we've got v6 up and running on Kubernetes," but, you know, how are we going to manage this at scale?
Speaker B: Yeah, it feels like someone should try it in real life and report back. I guess there's a danger; what I'm trying to say is: let's suppose the /64 per host is a really bad thing, like you say, and let's also suppose that it becomes wildly popular amongst early adopters. You know how that goes, right?
Speaker I: I think that's when we run into managing very, very sparse IPAM tables. But if the local CNI plugin could then do SLAAC to the top of rack, that might be another option, with the /64s coming out of the top-of-rack switch, or the border router, or whatever is doing the RAs within that L2 zone. That's another possible way of doing it.
Speaker I: You know, we started talking about this, Ken, what, at the beginning of the year, sometime last year: v6 is one of the things where we want to help provide some guidance. I think, if I remember correctly (memory being faulty), what Ken and I have been talking about is: is there a best current practice, or should we actually document something? This might be the starting point of a BCP for v6. I'm not sure there's a standard to be promulgated here, as in "this is the way you must do it." It's more: people say "I want to do v6"; here are the CNCF's suggested mechanisms to do that, both for adopters and for people writing infrastructure that goes into CNCF cloud-native infrastructures. In my view it's more of a document than a promulgated standard, especially since we're not a standards organization.
Speaker I: Now, we could see about testing some of this in real life and seeing what works and what doesn't. I think the problems are going to pop up at scale; I think anything will work at small scale. It's going to be about testing it at big enough scale that we can actually start finding the issues.
Speaker F: Is anyone still awake? I think that's it; let's wrap, I know we're over time. So I'll follow up in two weeks; I've sent some emails. We've made a little bit of progress on the services piece, and on the v6 side, the idea is around some kind of a position statement, some kind of...