From YouTube: Kubernetes SIG Network bi-weekly meeting for 20210401
A
Cool, we're recording now. This is Kubernetes SIG Network from April 1st, 2021. Tim, do you want to start us off? Sure. Will, you should be able to share.

B
Everybody can see that? Looks good, all right. I've got the list of this week's issues down to five. I responded to a few myself and the rest are looking okay. First one is a test failure around SCTP connectivity, and I have no other information. I did not click through on this one. Volunteers?

B
That's right. As I was trawling through one of the types.go files, I noticed that there's still a comment in some of the network policy structs that says "this is a beta field as of 1.8", so...

B
All right. I filed this issue this week because there was an open PR in this area and there's been lots of great discussion, which was exactly the reason why I filed the issue.
B
It doesn't really need to be discussed here, but I thought it might be worth spending a minute on the issue at hand for anybody who hasn't read it. We have some logic in kube-proxy and in kubelet's dockershim which, for NodePorts and host ports respectively, will try to open the port on the host OS before it installs any sort of port forward, the idea being to not steal ports from other applications that might already be running.

B
In the case of a NodePort, it doesn't actually refuse to take the port; it just complains about it. In the case of a host port, it actually will fail to launch the pod if it can't claim all the host ports that the pod has asked for. And this is nice: if, for example, your pod claims host port 22, you won't actually steal the traffic away from the SSH daemon.
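(To make that concrete, here is a minimal sketch of the kind of pod spec that check guards against; the pod name and image are illustrative.)

```yaml
# Illustrative only: a pod requesting hostPort 22. If sshd already
# holds :22 on the node, the check described above makes kubelet
# (via dockershim) fail to launch the pod rather than steal traffic.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 2222
      hostPort: 22           # claims port 22 on the host OS
```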
B
The problem is, the logic is kind of twisty. In kube-proxy, all we do is complain about it, and in kubelet, if kubelet itself restarts, we don't actually go through this process again, and this is a long-open bug. So that sort of mitigates the value of it, and we could either fix the bug, like figure out how to do this properly, or just abandon this idea as not super useful.

B
My fear is that even if we fix the bug, and we make an out-of-process port-holder daemon, it doesn't actually fix the attack vector, because all I have to do is make the SSH daemon crash, right? Send it thousands of connections, and then I can steal its port and it can never get it back.
E
At what point do we say that cluster administrators should have better control over their clusters and not let things claim ports below a certain number? I mean, yeah, there are probably legit reasons to have your pods on 22 in some clusters, but certainly not for most clusters, and it feels like that should be an explicit decision. And then, after that, you kind of assume that everything else is managed by kubelet, and that kubelet, or something, or the network plugin, could arbitrate after that.

B
No. Host ports are whatever the heck you want, but, you know, host ports in general are a very dangerous feature, and so they're one of the first things that security-conscious admins should probably turn off via policy. We don't make it easy to turn off. There are pod security policies and there's gatekeeper and those sorts of things, right? Like, go...
B
...do it yourself. So one option here would be, like other things that we've talked about with these super dangerous features, to offer an admission controller. But what we don't want to do is invent a new policy language where we say users x, y, and z are allowed to use ports a, b, and c.

B
If we were to do anything, it would be very coarse, right? Like: don't use it at all, only allow it in cluster namespaces, only allow it in namespaces labeled x, or only allow it on ports higher than 1024, or whatever. Like, one or two rules we would want to institute, and that would leave everybody else to go and do it themselves. Is that useful? Is that worthwhile?
B
Well, I mean, the problem there is it's on by default, so the default has to be open, and so we can give people a muzzle for their foot gun... there's no good analogy here... a cork for their foot gun. But they still have to opt into that, and, like most of these admission controllers, if you go through a hosted provider, you don't have access to those flags in the first place.
A
Don't pod security policies already allow control over whether pods can specify host ports?
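(For reference, PSP does cover this. A minimal sketch, assuming the policy/v1beta1 API, of a PodSecurityPolicy that only allows host ports above 1024, roughly the coarse rule discussed above; the name is illustrative.)

```yaml
# Minimal sketch: a PodSecurityPolicy restricting host ports to >1024.
# "restricted-hostports" is an illustrative name; the other rules shown
# are just the minimum required fields for a valid PSP.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-hostports
spec:
  hostPorts:            # pods may only request host ports in these ranges
  - min: 1025
    max: 65535
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```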
E
Sorry, Dan, you broke up for a second for me. I don't know if it was just me. Yes, sir... no, it was not just you. It kind of seems like kubelet should take care of pods, but it's harder, or it's a longer stretch, to say that kubelet should take care of pods and everything to do with host stuff.

B
So Lockhee says in the chat: there's a PSP coming back as a different interface, yet to be named; there's a KEP for that. So maybe the answer here is to just not do anything. This isn't an urgent issue. I don't know how many people have had... I have never heard of a customer who had this explode on them. That's not to say it hasn't happened; I just have never heard about it.
B
Anyway, we've spent enough time on this. We can carry this back into the bug. On one hand, I would love to just delete this code, because I find it of dubious value.

B
On the other hand, if we can make it valuable, then cool, let's keep it. Right? Like, going back to the idea of IP and node port allocations, maybe this falls into the same category as a NodePort allocation, and the savvy cluster admin will earmark that port 22 is used for something, and therefore the scheduler won't even schedule pods that try to use port 22.
B
That's true. That is true. Though I don't know what the CNI equivalent of the host port forwarder does. I don't know if it's smart enough to detect that failure and then fail, or if it just says "nope, I'm fine."
B
This one is, for the record, #100643, and please feel free to jump in.

B
Issue #100622, relaxing constraints of pod DNS config. There's actually a PR to go with this one, so I think it's reasonable to... oh, I already did; I marked it as triage accepted. It's basically a feature request.
B
The request here is saying: the kubelet has legacy restrictions on the number of DNS search paths and the total size of the search path, which date to really, really, really old glibcs. That has recently been fixed, so those limits are no longer true, and so the PR is proposing to just remove the limits.
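(For concreteness, a sketch of the kind of pod spec the PR would allow; kubelet's legacy limits were 6 search paths and 256 characters total, and all names below are made up.)

```yaml
# Illustrative pod whose dnsConfig exceeds the legacy glibc-era limit
# of 6 search paths that the PR proposes to lift.
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.0.0.10
    searches:            # 7 entries: one over the old 6-path limit
    - ns1.svc.cluster.local
    - ns2.svc.cluster.local
    - ns3.svc.cluster.local
    - ns4.svc.cluster.local
    - ns5.svc.cluster.local
    - ns6.svc.cluster.local
    - ns7.svc.cluster.local
```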
B
I did a little bit of that myself and confirmed that, at worst, some of these resolvers ignore entries after six or after eight, but none of them exploded. The ones I tried... he's tried more than I have. So I'm inclined to let this PR proceed, but I thought it was worth bringing up here. Anybody have concerns about this?
G
The comment... because the thing is, there are a lot of unknowns, and my concern is what happens if, after four months or six months, somebody comes along with, I don't know, whatever language or operating system or whatever thing, and it fails. Because Windows... we don't know what the impact of this is on Windows, or whether Windows doesn't care about the glibc limits.
B
...too small. So, if I recall, the PR was not feature-gated, so maybe that's the fix here: add the feature gate and start it in alpha. Actually, maybe even just start it in beta, because there's no new API on it, right? Just start it in beta and say we're going to leave it beta for at least two releases.
B
"Kubelet: provide a feature to configure CNI through config versus lexicographic order." We talked about this one, I think, last time, right? And the user just posted an update a couple days ago, asking: what if we made it a kubelet config, and if you don't specify the config, then we fall back on lexicographic order? Dan, you're on this bug. I don't have a strong feeling of whether this is worth doing or not. Your thoughts?
E
I mean, the other thing is that this stuff gets pushed out to the runtimes now anyway, and once dockershim goes away, we're not really going to have a capability to specify any kind of config, so it seems like we're kind of punting on it. I think there's still, you know, kind of a larger open discussion, that's been on ice for a while, around:

E
could we do more native network-configuration-type stuff in a more Kubernetes way? Like, should something like this, instead of being a kubelet option, be a custom resource that is agreed on by everyone, or something like that? But I don't think it's probably good for it to be a kubelet option, especially going forward.
B
Thanks. Last one, which we looked at a few weeks ago and Cal signed up to look at, and I haven't seen any updates on: this is the following of probes through the hostname field.
A
Cool. I think the next item was from Jay, around kube-proxy.
J
...but then I took a look into the issue that Alejandro stepped down, and Chris was also stepping down, because, I guess you saw, he was moving companies and probably didn't have enough time to take care of ingress. So I reached out to him and said that probably we should take a look into this and discuss the risk of it, because I don't know anymore if ingress-nginx, which is probably the most used ingress, is maintained by anyone.
B
So I think it is a huge risk, but we can't require people to work on it, you know. As a SIG, we can say we'll take it over, but that doesn't mean anything unless we have people who are volunteering to do it. I don't presume that that's you.
J
Ricardo, yeah. I'm sorry, I think I can't fit that onto my plate; otherwise, probably, yeah. I have some time, sometimes, to take a look into the issues, but not to develop anything else. There are some folks working on that; I can probably ping them and see if they want to become maintainers of it.
K
Yeah, unfortunately, I'm not currently actively there. I thought there were a bunch of people who volunteered, but apparently not.
B
Okay, that would be a great start. If we can't get people volunteering, it might be worth bringing this up to the level of, like, steering, and just letting them know. I don't know if this is the first time it's happened, but it's certainly an important milestone, right? Something we depend on is effectively abandoned.
B
What should we do? Should we try to find money to hire contractors through CNCF? Should we make a lot of noise and try to drum up new people? But if you can find people organically, that would be a better answer, right? So why don't we push this to the agenda for, like, two weeks or four weeks, and if we can't figure something out, then we'll figure out an escalation plan. Sounds good.
D
To me, my suggestion, if we're putting it on the agenda for a couple weeks from now, is we try to carefully, clearly scope what it is (like, links to the repos, what it is we're looking for people to say yes to), just so that, you know, if people are looking to get involved, they can figure out if it's something that's relevant to them.
B
I don't think that ever happens. Oh, damn.

A
Let's not go with that approach, then. I think Govind is up next.
L
Hey, thanks for the time, everyone. I hope everybody's staying safe and healthy. I wanted to pick up our FQDN thread that, you know, we talked about some time ago. We've gotten some really good feedback from the community on the one-pager, I guess two-pager, that I wrote some time ago. There was so much feedback that I didn't know how to process it and put it all into a doc again.
L
So we thought it would be a good idea to maybe make it more actionable by putting out an open-source sort of network policy controller that just generates network policies based on an FQDN CRD that we have proposed. So I just wanted to evangelize that with the group here, with the community, and start getting feedback on this.
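(A hypothetical sketch of what such an FQDN-based policy CRD could look like; the group, version, kind, and fields here are invented for illustration and are not the actual proposed API.)

```yaml
# Hypothetical shape only: an egress policy keyed on DNS names rather
# than IPs. A controller would resolve these names and emit standard
# NetworkPolicy objects with the resulting IPs.
apiVersion: example.dev/v1alpha1
kind: FQDNNetworkPolicy
metadata:
  name: allow-api-example-com
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: web
  egress:
  - to:
    - fqdns:
      - api.example.com   # resolved periodically; see the cache questions below
    ports:
    - protocol: TCP
      port: 443
```

How often the controller re-resolves those names, and which record types it follows, are exactly the semantic questions raised later in this discussion.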
L
You know, as to what all of you think; and provide your expertise and, you know, any sort of gotchas that we've overlooked. Mind you,

L
this is obviously a sort of very basic, for lack of a better word, version of what we intend to do eventually. But I think the more important point here is: can we get to an agreement on what the first iteration, at least, of the FQDN policy CRD should be, or just what the FQDN network policy resource should be? So this is linked here; it's on GitHub, it's accessible to all, and I very much encourage feedback directly on GitHub. Please try it out; please, you know, give it a whirl.
L
We've gotten some good response from some customers here as well, and it sounds like we're on the right track, but I'd love to get more validation from, you know, folks in this group, for whom this is bread-and-butter stuff.
B
The question I have, Govind, is: I just clicked on the link and it takes us to a Google Cloud git repo. Is the intention to get this to be something with a k8s.io suffix that the community supports, or are we just looking for general nodding at the model?
L
Yeah, so the intention eventually, as you said, Tim, is to get it into the Kubernetes, you know, open-source model, so that we actually have an API dedicated to FQDN policy. This is just, I guess, a first step in that direction, to build consensus, so that we know what the model should be. Because there are a lot of semantics here, for example: what types of DNS records do we support?

L
Or, you know, how frequently do you refresh your caches, and things like that, which were hard to capture in a two-pager, to be quite honest. So, to sort of keep those nuances, you know, or bring them across more saliently, we put this out as a first pass, but the intention here is fully to get this into Kubernetes networking and open source, yes.
E
Yeah. Is this an attempt to kind of PoC this approach, with a possible future addition to network policy, or do you envision it standing outside of network policy forever?
L
That's really the goal here: whatever makes sense from a taxonomy slash topology view of including this resource into Kubernetes, whether it's tucked under network policy or maybe a peer of another policy, I'm not really sure, and I think that's something that hopefully will come out as we start getting some validation on this concept. But, yes, to your point, it is very much a PoC with which I would like to get affirmation from the community at large to say: hey, look, this actually makes sense. Customers want this.
L
We should build this, and this CRD that you proposed looks about right for what we want to capture, at least for the v1.
J
Oh, I actually answered that in the cluster network policy discussion: that we should probably have that.
L
On GitHub, directly on the page that I've linked. If you wanted to report issues there or, you know, leave comments, that would be very, very useful.
B
You could also... so, it's difficult to leave comments without opening a PR or an issue. This is the downside of GitHub: you can't look at somebody's repo and leave comments unless there's an open PR on it.

B
Gotcha. Maybe, like, open a pull request on the README, and you have to go touch every line. I don't know; think about how to collect feedback on it.
L
Oh boy, I screwed this one up. Sorry, I thought that GitHub would be more friendly in terms of feedback collection from an open-source perspective. I should have given it more thought. Yeah, okay, I'll circle back on this and maybe update the GitHub page so that people know how to report feedback.
B
Can you hear me now? Yep? Yes, sorry. If you open a pull request against your README file and just touch every line of the README file, then anybody can come and comment on those lines, and then it behaves more like a Google Doc or something.
L
Got it, sounds good, makes sense to me. Okay, I'll work on that, and then hopefully that'll be intuitive enough for folks in the community to actually leave comments there, and then we can start, you know, moving this towards... converging this towards...
N
I am going to share my screen. So, hi everybody, my name is Laura, I'm from SIG Multicluster, and I want to talk about multi-cluster DNS. I put a bunch of links in here: to a doc, some slides (I'm going to share some of the slides today), and a pull request that I'm going to kind of circle back to. But anyway, there's a lot of deep lore here, I guess; if you want to get to all of the previous documents and stuff, that's what those are linked to.
N
So one of those links is the slide deck, which was originally made to talk about this in SIG Multicluster. So there's a bunch of detailed stuff in here about how DNS operates today that might be old news to everybody here, but I'm going to go through a little bit just to explain the situation. And then, spoilers: at the very end there's a slide that says "questions to SIG Network", which is what I'm looking for expertise on. So, basically, over in SIG Multicluster we're working on multi-cluster services.
N
We have a KEP called the MCS API, and that describes how multi-cluster services should work, and we want them to have correct DNS names. Current implementations are extending them in kind of a generally-agreed-upon way, but it's not standardized centrally, and we don't have full parity with the current cluster-local DNS specification. So there are SRV records and PTR records described for cluster-local that nobody who's implementing the MCS API is implementing yet, because we haven't decided anything about it yet.
N
So, in general, based on the original spec, we need to support A records, SRV records, and PTR records, which you all probably know about, but I had to learn about. And the two types of services that we support in multi-cluster services are something similar to ClusterIP services and something similar to headless services. So we call our multi-cluster ClusterIP services
N
"ClusterSetIP" services, so you might see that around. And then we kind of just call our multi-cluster headless services just multi-cluster headless services. But we need to do something similar to how DNS works in the normal way today. For example, ClusterIP services get names like blue.test.svc.cluster.local, and regular headless services today get something similar, blue.test.svc.cluster.local, or, more specific to the hostname of the pod, like yellow-1.yellow.test.svc.cluster.local.
N
So, just a little reminder there. There are some slides with a lot of words on them, and then diagrams, which are better. And the diagram situation for ClusterSetIP that we're proposing here is that the MCS API describes that there should be this ClusterSetIP that is effectively the front to all of these IPs back here, similar to a regular ClusterIP. So this should basically have a very similar-looking DNS name to how this looks cluster-local,
N
but the difference is that this zone here says clusterset.local. So, instead of blue.test.svc.cluster.local, this is blue.test.svc.clusterset.local. This is what goes to the ClusterSetIP, which itself can get into the pods backing that service in cluster A, or the pods backing that service in cluster B. On the headless side, we needed to be able to disambiguate if people want to go to this blue pod over here versus this blue pod over here, since it's possible, cluster-local, that they could have the same hostname.
N
So the big difference for multi-cluster headless services is, still, this different zone, clusterset.local, and then, when we are giving all the individual DNS names to each individual backing pod, we disambiguate them by their hostname and then also the cluster that they're in, and then all the other normal stuff: hostname.cluster.blue.test.svc.clusterset.local.
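(A sketch of how this looks in practice under the MCS API; the ServiceExport group/version is the alpha one from KEP-1645, and the names in the comments follow the scheme just described, with illustrative cluster and pod names.)

```yaml
# Exporting the "blue" Service to the cluster set (MCS API, KEP-1645).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: blue        # must match the name of the Service being exported
  namespace: test
# Resulting names, per the proposal above:
#   ClusterSetIP:       blue.test.svc.clusterset.local
#   Headless, per pod:  <hostname>.<cluster>.blue.test.svc.clusterset.local
#                       e.g. blue-1.cluster-a.blue.test.svc.clusterset.local
```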
N
So, some possible problems that we thought might be an issue; I just generally want some confirmation about whether we're thinking about the headless service DNS case properly. We know that StatefulSets are, like, the hot thing.
N
We really need to get this right, to make sure that this pod DNS is right. And then we've also kind of brought up that a pod's idea of its own fully qualified domain name potentially could be an issue: whether it should be a multi-cluster-specific one versus being a cluster-local one.
N
So, happy to talk more about any of the background in the slides that I went over very briefly here, just to introduce the situation. But here are, like, the real questions that I want to ask SIG Network. One thing that came up in SIG Multicluster is: should we even include SRV records? Like, do we need to keep parity with the current cluster-local specification's SRV records? Basically, this boiled down to: do people use this or not? Should we implement it or not? So, some people say they're not used.
N
Some people say that possibly certain applications, like VoIP, are using them. So I kind of want to get a temperature from here and see what people think. And just to introduce the other questions first, and then I'll kind of open the floor: is there any other networking-related tooling that we don't know of that might get confused by pods having both cluster-local and multi-cluster DNS names, that we need to look out for? So, basically, anything that's dependent on, like, hostname/FQDN.
N
The current implementations that are popular basically make a dummy or shadow Service object that represents the ClusterSetIP, local to the cluster, so that already gets a PTR record just because of the normal cluster-local DNS specification implementation. So we're planning on kind of leaving that alone and saying, you know, the reverse DNS will get the cluster-local hostname, or cluster-local DNS name.
N
So those are my questions, and general feedback is also welcome. But those are things I thought might be worth bringing up here, and I'm happy to take them now, or talk about them offline on either the pull request or on the...
B
I was one of the people who said: let's not do SRV records unless we really know what we're doing with them. And I also admit that I don't have a lot of personal experience with SRV records and the systems that tend to use them, so other input would be really welcome.
I
Regarding the SRV record question, the only input I have is that I've dealt with a few user and customer issues in that area with kube-dns, where we were not doing something according to the spec and the response was interpreted differently. So I do not have a great description of the use case, but I do know that there were users sensitive to how the response was returned, and they were doing something with it. So, yeah.
N
Yes, we expected that they would point to cluster.local. I guess I'm just generally wondering if we ever need to make that change, or if there's something that is going to be upset that a service could have been contacted by a pod elsewhere. It's kind of a general feeling more than anything super specific. So I just want to know if there's any tooling that people know about that I should be looking into.
I
Got it. The only thing that I recall hostname/FQDN being relevant for is: there were some Kerberos-like setups, I think, where a pod would try to reach out to it to get some auth ticket, and there would be some checks to look at the pod IP, do its PTR record, and make sure the hostname that it advertises matches that. But with everything sitting with cluster.local, it looks like it will be unaffected.
B
That's the use case that I know of for that, too. Somewhere there's a proposal, which I think I lost track of, to allow users to just put whatever they want as their in-pod FQDN, and while I was initially really against it... we tried to set up all the structure around the headless service, with a name that matches the subdomain of the pod, and it turns out that for some users that just isn't enough, and they just want to be able to put whatever they want in that field. And it doesn't matter anyway, because it's only going to show up inside their pods.

B
So why am I stopping them from hurting themselves? And I guess I agree with that. I seem to have lost track of the proposal, so I'll have to go back and find it and see if that moves forward.
N
Great. Well, happy to take other points, but I will just point out that there is this pull request that you can add your comments to. It includes some changes to some, like, rationale (that's the word) in the MCS API KEP, and then there's this whole document, specification.md, that's supposed to be very similar to the existing specification.md that's in the kubernetes/dns repo. But if you're interested, the doc that it's derived from... not all of the prose is over there, but it's possible
N
it might be a little bit easier to read, because I highlighted in orange exactly where I deviated from the existing DNS specification. So if you don't have the existing DNS specification memorized by heart, that might be, like, an easier way to read it. But either way, looking for your comments. Great, thanks.
A
Thanks, Laura. Last item on the agenda was from Rahul, to talk about a cluster CIDR.
P
Yeah, that's right, thanks. Just to introduce myself really quickly: this is my first time speaking up at one of these meetings. My name is Rahul, nice to meet you all. I work in GKE on networking. And, yeah, this is something that we've been kicking around, and we wanted to basically get some feedback on whether this is an idea that, you know, makes sense in-tree, for open source, or whether this isn't something that we really want to get involved in.
P
The idea is basically: currently, you can provide a cluster CIDR when you create your cluster, and then the node IPAM controller goes in and slices it up into a bunch of smaller CIDRs that are then handed out to the nodes. There are plenty of requests from users on being able to expand these ranges without restarting your cluster or making a brand new one, and, you know, it provides a lot of flexibility and general improvements to scaling.
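(For concreteness, a worked example of that slicing, assuming kube-controller-manager's default IPv4 node mask of /24 and the /18 mentioned below:)

$$\text{node CIDRs in a }/18 = 2^{24-18} = 64, \qquad \text{pod IPs per node CIDR} = 2^{32-24} - 2 = 254$$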
P
So the question is, you know: do we want to be enhancing the in-tree node allocators to support, you know, multiple discontiguous cluster CIDRs that can be added or deleted during the lifecycle of the cluster? So users can say: hey, I need more pods; just throw another, you know, slash-whatever, a /18, at your cluster, and now you have a lot more space to continue scaling. So we have put together a basic KEP that tries to outline what the goals are,
P
what the non-goals are, and then a proposed resource where, we think, we define what a cluster CIDR is; you can create more of them, and then we'll write a new controller that would actuate and do the partitioning. That's sort of the high-level idea. The major caveat, I think, that we want to point out is: we're not trying to say Kubernetes should be in the business of general IPAM. We want to keep this very focused. We have a range allocator that already exists.
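(A hypothetical sketch of the kind of resource being described; the group, kind, and fields here are invented for illustration and are not the KEP's actual API.)

```yaml
# Hypothetical shape only, not the actual proposed API: an additional
# pod CIDR range added after cluster creation, from which the node
# IPAM controller could carve per-node CIDRs.
apiVersion: networking.example.dev/v1alpha1
kind: ClusterCIDR
metadata:
  name: extra-pod-range
spec:
  ipv4CIDR: 10.64.0.0/18    # discontiguous with the original --cluster-cidr
  perNodeMaskSize: 24       # size of each node's slice
```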
P
We want to make an incremental improvement to it, to say you can provide more than one range, but we don't want to go too far down the rabbit hole, because, you know, there are a million and one ways to do things, so it's not in our benefit to try to support everything right off the bat. Yeah. So I'm curious if folks have any comments, obviously either right now or, you know, in the KEP as they come up.
P
The latter. So we want to improve the current IPAM controller. There's a flag that you provide to kube-proxy and, more relevant here, to kube-controller-manager, which says --cluster-cidr, and that's the single contiguous CIDR from which the node IPAM controller then allocates pod CIDRs, and then it fills in the node.spec.podCIDR field. So we're not touching the node.spec.podCIDR field right now; we're just talking about that command-line flag that you provide to kube-controller-manager.
B
I got to pre-read this proposal, and I have a lot of concerns that I've already expressed, but I would like to go through them here. This is our last agenda item, right, Casey?

B
Very smart move, then.
F
Rahul, at a high level I agree. I'm sure Tim's going to outline some things that I'm probably concerned with, so I'll save the floor for him, too. But, you know, when we unpacked dual-stack, it kind of came up a few times that maybe we could take the initiative, and we decided to park it, but we had always wanted to do that as well.
F
So, if there's any way we can help... I think, you know, Anish is on this call, who worked on dual-stack, and maybe Cal as well might be able to help, if not implement, then review. But I think, for the question of: is it worth putting in open-source Kubernetes?

F
I think the answer is yes, because a lot of people stub their toes on readdressing clusters, and the answer for them today is "just provision a new cluster", and the thing is, that may not help them either, because they may have discontiguous blocks. So I think, in general, it's a problem, and it's only going to get worse as clusters get bigger and people use Kubernetes more, and IPv6 isn't always the answer.
F
It's probably a lot more for IPv4 use cases, but we should maybe extend it so that you can add multiple v6 CIDR blocks as well, at the same time, because, you know, we're going to hit that eventually as well, when people say that, you know, 10 billion addresses isn't enough for my personal Raspberry Pi network. So, all this to say: I think, you know, high level...
P
Sounds good. Yeah, we definitely are looking for, you know, active help. If for nothing else, then at least we can define a good spec and API around what we're looking for, and then, you know, the controllers can come down the line.
A
I haven't read the proposal yet, but I think I would say I agree at a high level: this is a problem of just generally wanting discontiguous CIDR space, which I see people have. I'm guessing, if and when I read that proposal, my concerns will be around its ties to the cluster allocator, the CIDR allocator, rather than the general concept itself.
B
So, I'm on record previously as being sort of unhappy with the fact that Kubernetes is involved in IPAM at all. But it is, and I agree that this is a real pain point for users, and we shouldn't abandon them just because we made a mistake. So I'm generally in favor of solving this problem, but I see a lot of risks. I doubt anything I say here will be the first time you've heard it, but, for the audience and the record:
B
First of all, I think, if we're going to do this, we should do it as a new module, not as extending the existing cluster allocator, because I think it's going to be different enough that almost all the code will be different. So let's just make it a different one. And if it turns out that it really works really well, we can maybe do the old Indiana Jones idol trick and swap it into the old name. But in the meantime... Tim, that's... it's true. That's true!
B
There's a related issue to this, which is variable-sized node CIDRs. There's a separate KEP just for that, and I think with that KEP I made the same argument: we should just make a new module and implement variable-size CIDRs there. One of the things we'll just need to figure out is: is the CIDR prefix size an attribute of a particular range, or is it variable within the range? Like, are we writing malloc, or are we writing a slab allocator?
B
Yes. So, the thing that I like best about this model is it takes something that is currently a flag and makes it not a flag, right? It makes it... we'll have to make it part of the API, in band, because flags are just a horrible thing to update.
B
So imagine that there's going to be a resource that says "these are the CIDRs that I'm allowed to use for my cluster", which is nice, because it actually solves one of these long-standing bugs about "what are the CIDRs for my cluster?" The downside of it is validation of input. Today, if I create a node and I say "give me this CIDR range", I can statically validate it against the flag and say: sorry, this is not valid, get out of here. Right? In the future,
B
that will require us to do a different set of checks, right? We'll have to keep track of all the other objects and compare against those. Not insurmountable, but I don't know if we do that anywhere else, so we just need to go carefully. That's my biggest concern with this overall, but otherwise, yes, I think we should tackle this.
G
Well, we went back and forth on this continuously, but the thing is, as a first step you can move to a resource without any validation. So, instead of using a flag, you just say: this CIDR, with this prefix, right? Does that address your concerns, or is it still a problem? That's the question. I mean, we take the flags, and we say: instead of the flags, you put the CIDR, with the node mask prefix, whatever, in the resource.
B
I don't know if it's a problem; I just think it's different than anything else we've done, right? Where we have to actively read the resource out of storage and then keep track of that while we resolve other API requests. I just don't know if we do anything like that anywhere else for static validation. Right? Like, if you look at it... I mean, the validation code today doesn't have any context, right?
B
It's all syntactic validation. There's a small number of resources that do more contextual validation, like Service, right, where it has access to the flags. I'm not even sure if we do that for node.spec.podCIDR today. I know we have some rules that you can't change it from a value to another value, but you can change it from empty to a value; but I don't know if we actually even do static validation on it.

B
So maybe I'm making a big deal about nothing, or maybe it's, like, a net improvement anyway. So...
B
We do. It's not the most pleasant pattern; you've seen the rewrite of that code that I'm trying to put in place, because it's not a very pleasant pattern. So I'm not sure we want to copy that pattern. But, yes, you're right, there is precedent for that dimension of it. That still doesn't read from a resource, right?
G
I talked to him, and he suggested another approach, and I think that you should check it. Okay, in summary, he's suggesting to share the allocator and reconstruct it when needed, so you don't store the allocation, but you have some process in the API server that provides the allocator. So I think that's similar to what you are suggesting. Okay.
B
The biggest concern that I have is... you're right, we would need to do it if we move the allocators off into resources, so we'll need to cross this bridge. Today I have a static flag that doesn't change, and I don't have, like, a watch controller or anything in the API server.
G
But that's an interesting part, because the situation is worse right now. This week I was playing with this, and the problem is that you can now create three different API servers with different flags on the allocators, yeah, and each API server starts to fight against the others, right there.
B
Okay, that's a fair point. I just don't know, from a logistic, code point of view, how we would write the watch from inside of a REST registry, but we can figure that out.

B
That's true, we do have that. Okay, so maybe there is something to build on.
G
I have one question now... but is there time? Okay, let me ask this question. The thing is, with the cluster network policy... I was reading the KEP, and it's really nice, but my question is about the API machinery. All the comments are about including these new network policy APIs in the group networking.k8s.io/v1 and adding them as alpha. I really don't know if that is even possible, or if API machinery is going to allow serving an alpha... sorry, a new resource, in a group that was already released.
B
That is a great question, which I have asked Jordan myself, because I think the answer is no, and we would probably have to put it into v2alpha1. Yay. But, in my opinion, this is all kinds of messed up, and it just says to me that we're doing alpha and beta wrong, which I've raised with the API machinery folks, and they don't even particularly disagree, but it's very difficult to change.
A
Well, we are just out of time now, so we'll have to finish that one up on the KEP, or offline. Thanks for coming, everybody; we'll see you in another two weeks.