From YouTube: Kubernetes SIG Network meeting 20210304
A
And we're now recording. This is SIG Network from Thursday, March 4th, 2021.
C
Yeah, I fixed it this past week. Don't tell me that two hours ago, okay.
B
Okay: node-local DNS cache breaks external-dns updates. I read this over; I have no idea why it would do this, and it needs somebody to go and look at why.
B
Nobody? Okay. Pod probes lead to blind SSRF from the node. This is a fun one, and I thought it was worth discussing here. It's not new; it has come up before, but it seemed worth covering briefly. The issue is: you can configure a readiness probe with a host field.
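A minimal sketch of the kind of manifest being described (the name and address here are made up): the probe's `host` field makes the kubelet, from the node, probe an address other than the pod's own IP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-ssrf-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    readinessProbe:
      httpGet:
        # Not the pod's own IP: the kubelet probes whatever address
        # is named here, e.g. a link-local metadata endpoint.
        host: 169.254.169.254
        path: /
        port: 80
```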
B
So in that sense it is an info leak, and it was disclosed as a security issue. So the question is really: is this feature useful enough that we should consider doing more than just disabling it, the same way we disabled external IPs? That's the discussion at the bottom; Tim Allclair suggested that we write an admission controller to just disable it, like we did for external IPs.
B
I agree, and if I recall, Mike Spreitzer, who we haven't seen here in a while, was using it to do something clever with segmented networks. So I do believe there are reasons people might want to use this, and strictly getting rid of it is not an option, right? The best we can do is give cluster admins an option to disable it.
D
Well, we don't want to do that as a solution to this, but there are a handful of things, like the enforcement of unique pod IPs, and the whole ambiguity of whether the liveness probe means the pod is running the process it's supposed to be running versus it's connected to the network, and how that interacts with ready++. So there are arguments for: yeah, we should kill this all off and totally redesign probes.
B
Yes, but we can't. I mean, again, we have to keep the equivalent here. So we can come up with v2 (oh god, I didn't just say that out loud). We can think about what we might want to have done instead, but we should decide if we're going to mitigate this. I feel like we should do something here, and I don't have anything smarter than that admission control.
B
So we don't need to debate it too much here. I didn't put it on the agenda for real discussion, but it's sort of a fun issue, and if anybody wants to think about it and/or weigh in on it, please do: 99425. Absent any other signal, we'll probably just pursue the admission control option.
E
Just to let you folks know, we discussed this issue in the SIG Network Policy API meeting on Monday, and folks are already thinking about whether, as a long-term option, a cluster-scoped network policy targeting maybe hosts would be something. I don't know if Abhishek and Satish are here, but they have started to think about this as a long-term option, and also whether we should support the node selector, with this being somehow a user story for it.
B
Okay, interesting idea. All right, next: kube-proxy iptables wrong configuration for externalTrafficPolicy: Local in large clusters. Basically, the user is reporting that sometimes their local pods aren't showing up in the local set of endpoints, and I think they said they're using MetalLB.
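For context, this is the service shape under discussion, with made-up names; with `externalTrafficPolicy: Local`, kube-proxy only forwards to endpoints on the node that received the traffic.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo                   # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
  # Only endpoints on the receiving node are used; if none is
  # programmed there (yet), traffic to that node is dropped.
  externalTrafficPolicy: Local
```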
B
So traffic is arriving at the node and black-holing because it doesn't have a local endpoint. I asked here; my suspicion is this probably has something to do with pods coming up and down, so the pods aren't ready yet, or propagation delay, or something like that. Andrew, I don't know if you're here, but I wonder if this leans into your terminating-endpoints and DaemonSet stuff.
B
Service refuses traffic to endpoints after updating the selected pod label. This is one that Antonio pretty definitively proved to be DNS. Whoops, I don't know what I just did. But the user asked for some follow-up, and I don't really know what they're asking for follow-up on. They specifically mentioned GKE; is anybody else from the Google side here today?
H
Reminder about the Kube 1.21 code freeze: it's March 9th, which is next Tuesday. So beware: if you have enhancements, make sure those enhancements are completed by March 9th; otherwise your enhancement will get removed from the milestone and you'll have to get an exception to add it back in. So please request reviews of those PRs from people.
E
He actually asked me (and Antonio is not going to escape from me either) to proxy what he put in the item in the agenda. So we have a really old issue, from 2016, of folks asking for port ranges on services, and I guess there is some discussion about the environment variables that were added previously to support the same behavior as Docker links and so on.
E
But, tl;dr, the question is whether we should bring this service port-range support up again, now that network policy also accepts port ranges, and check what the user stories are. Because we are seeing folks asking, like: I want to run my WebRTC server inside Kubernetes, but I need to open a bunch of UDP ports on my load balancer, and those are port ranges. Jay kindly asked me to bring this to you and be beaten up by you folks, because he ran into this idea. So what do you think about this, and should we tackle this again or not? Tim put a good comment about the environment variables, but I think this is solvable: if you use a service port range, we are going to disable the environment variable injection in your pod.
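To make the request concrete, a purely hypothetical manifest; no `portRange` field exists in the Service API, this is just the shape people are asking for:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webrtc-media           # made-up example
spec:
  type: LoadBalancer
  selector:
    app: webrtc
  ports:
  - protocol: UDP
    portRange: 10000-20000     # hypothetical field, not in the real API
```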
B
I think it's a great idea that we should totally do, but we can't, and as I recall there were two main reasons. One: the API for port and targetPort means people assume you can remap ports. It seems obvious that you'd be able to remap a range; it should just be an offset. It's not: iptables is not implemented that way. There is no way to do a port range in iptables that remaps to a different port range.
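A rough illustration of the iptables limitation being described, assuming a hypothetical backend IP of 10.0.0.5: a DNAT with no target port preserves the original destination ports, but naming a target port collapses the whole range onto that one port, and there is no offset-preserving range-to-range remap.

```shell
# Range forwarding that preserves the original ports works:
iptables -t nat -A PREROUTING -p udp --dport 10000:20000 \
  -j DNAT --to-destination 10.0.0.5        # hypothetical pod IP

# But naming a single target port maps the entire range onto it:
iptables -t nat -A PREROUTING -p udp --dport 10000:20000 \
  -j DNAT --to-destination 10.0.0.5:30000
```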
B
It'll let you specify a range and a target, and then it'll map all of those ports to one port. So thanks, iptables. And even worse, IPVS has no way of representing port ranges. There was an attempt to do this by using fwmarks and saying: well, if you set mark zero, then you mean this range, and mark one means that range. Which means you can only have 32-minus-however-many-bits-you-use distinct port ranges in a cluster, which isn't going to fly. But, so, I'll go ahead. Sorry, no, no, you go ahead, and I'll jump to my conclusion later.
B
Yeah, curiously, the load balancers are actually reasonably well equipped to do this. Node ports are another issue, right: if I have a range of ports, do I open a range of node ports too, node ports being a very limited resource? But the biggest problem is that we have too many things, like environment variables, that want to enumerate all of the ports and expand the list.
B
The conclusion I came to was that the better answer is to see if we can figure out how to do whole-IP forwarding. Basically, you can have a list of individual ports, or you can just forward the whole dang thing, and it seems like all the load balancers are capable of doing that too. In that case, we would say there is no node port, and we could, like you said for environment variables, just sort of define it away.
B
Okay, so Ricardo, do you think that would satisfy the use case here?
D
I was going to say, it is really unclear what the user request here is. I commented on this PR a few months ago. If you google "SIP Kubernetes", this is the PR that you find, and I think most of the people here just want a solution that makes SIP on Kubernetes work, and several people have commented on this PR that the proposed feature would not actually solve that use case. So it requires...
B
Investigation, okay. Yeah, I am by no means a SIP or VoIP expert. SIP is the case that I always hear about for people who need thousands of ports, and the few customers that I've talked to about this seem to be happy with the idea of "we'll just forward the whole thing", and just don't listen on things that you don't want to publish at that point. If...
K
If I may? It's coming from running NFVs on Kubernetes. If you're running a load balancer on Kubernetes, say I want to run a load balancer as a pod, then irrespective of what it does in terms of iptables, its own user space, or whatever, you just want to receive anything that comes to this IP.
K
So this is basically the use case, and SIP falls into this use case. Load balancers, especially non-VXLAN stuff, need something like that, and so on; it all falls down to that thing. That's why, if you go to the clouds today and you ask for a load balancer and say "forward everything", they allow you to do that, just to let you run an NFV-like component inside your VNet.
B
Yeah, that seems like a workable feature, if we can figure out how to sort of bend the API without breaking it. This is a place where past assumptions are locking us in a little bit, but it's...
B
Unfortunately, the way we currently have the spec written is that zero is explicitly not allowed. So this is where we get into the fine points of interpreting compatibility, right: zero was never allowed, so possibly there's a client out there who sees zero and says this is invalid.
B
Man, Hyrum's Law. Okay, all right. What is that? Anything that is exposed by your product is part of your API.
B
Okay. So, Ricardo, I'm supportive of the whole-IPs idea, if we can figure it out and if there's interest in pushing it forward. I know Pavitra did some work thinking about it; maybe you can sync up.
F
Yeah, okay, so a couple of small ones. First, one small: a couple of meetings ago I mentioned that we were looking into renaming the Service APIs to the Gateway API. That happened. So just so you know, if you're hearing about the Gateway API, or if you're looking for the Service APIs, that's what it turned into.
F
This is something that's easy to do in isolation, because we said: well, we really just want to either enable this and have some automatic approach, or not; just a very simple on/off idea, and hopefully we can get it good enough that we can eventually do it by default. So that was what inspired all of this. At the same time, there was other work going on on traffic policy: we already have externalTrafficPolicy: Local, and in 1.21 we're adding internal traffic policy.
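The two fields mentioned, shown together on a sketch service (names made up; `internalTrafficPolicy` was the new, gated addition in 1.21):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo                     # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
  externalTrafficPolicy: Local   # existing: node-local endpoints for external traffic
  internalTrafficPolicy: Local   # being added in 1.21 for in-cluster traffic
```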
F
There are a lot of different ways to represent that. I outlined some of them in that email thread, and I want to make sure that what we're proposing here actually makes sense, not just in the context of this one KEP, but in the context of the broader service API, so that when you look at all these independent fields, the combinations make sense.
F
I think Antonio had a good idea in, I think, the last response on that thread, which was: maybe for now we should just start with an annotation. You know, once you have an annotation you basically have to support it forever, I think, but it is one way that might be relatively simple to introduce this. It's certainly not going to be required on every service; it's kind of an opt-in, or maybe eventually an opt-out, kind of thing. But yeah, I just wanted to open some conversation here.
B
Okay, so the biggest issues that are swirling for me, just to catch everybody else up: right now we know that our implementation of topology is not going to be perfect, in that there's no feedback; we don't have any utilization information or anything. So it's possible to end up in a situation where you have really pathological performance.
B
You'd have to sort of cross three different streams to get there, but it's possible to do, and so I felt like there has to be some way to say: no, no, no, back this out, I don't want it. Then the question that Antonio asked, which I thought was insightful, was: is this really part of the API, or is it part of the implementation?
B
I know what I'm doing, and I know who you are, and I want you to not do this. So that's kind of where I'm sitting with it right now. It's also the easiest thing to back out of: if we get that wrong and we decide we do need a field, we can always add a field, right?
F
Just one clarification on that annotation. That's a really great summary. I know when we talked about a field here, we had talked about at least starting with an opt-in and transitioning to an opt-out once we felt comfortable enough that the opt-in was performant and enough people liked it and were using it. Are you suggesting that we start with an opt-out and not provide an opt-in? I mean, separate from a feature gate.
B
I think for alpha we can totally get away with that. In fact, the PR is currently implemented this way, right: if you turn on the feature gate, you're getting topology, everybody's getting topology, and there is no opt-out. And that seems totally reasonable for alpha.
A
Was the next one yours as well, Rob? There was one about low-numbered node ports.
C
I raised this topic in the channel the other day, but this comes up recurrently, I think every two months or something, and I remember someone in one of the meetings asking for this. But there is an option to configure the node port range, right? I'm not going to try it with low values myself, but if somebody tries it and it doesn't work, they can open a bug.
C
So, I mean, do we want to perpetuate this hack and build on top of that? Or, I don't know, invest in something like, I love this thing of forwarding things to the pod directly. I mean, because...
G
Oh, okay. So I felt motivated and actually looked at the five enhancements that the spreadsheet claims are at risk. One of them is being taken care of; another one can be removed from tracking for 1.21. There were three left, and Andrew, Andrew Sy Kim, do you need help? Because they all have your name on them. Do you want to parcel, you know, portion them out?
B
So I was reading 1959, I was reading the PR for that today, and I think I sent some feedback on that one. Internal traffic policy, we were discussing this; this is the issue that Rob brought up, so that is very much in progress, and it's going to be a slide-it-under-the-door at 8 a.m. before the professor gets in sort of situation.
G
It's probably worth having whoever feels the most informed about these comment on the issues in the enhancements repo that I'm linking, because the last comment on each of those, as of 20 minutes ago, was: hello, you're going to miss the boat. Okay.
J
Yeah, sorry, I've been pretty busy with the internal release we have coming out, of course. So yeah, the internal traffic policy one, as Tim mentioned, someone on my team is working on that one. The load balancer class annotation also, I think, is close to review. Tim, if you could review that one, that'd be great.
J
Okay, awesome. For the optionally-disable-NodePort one, I had a PR open to add integration tests for that. I think Rob reviewed it, and I think I need to address some of his feedback, but aside from adding some good e2e and integration test coverage, it should just be flipping the feature gate to beta. I was hoping Lars might be able to do that, but I haven't seen him on GitHub for a while. Once the integration tests PR merges, I can just flip the switch, if we're okay with turning that on to beta for 1.21.
J
So, while we're on this topic, there's also the work around kube-proxy handling terminating pods, which I'm hoping to work on once we have the traffic policy stuff sorted out. I get the sense maybe it's too late for 1.21, but do we want to discuss that at all?
J
Yeah, so where we left off: in 1.20 we basically made the API changes in EndpointSlice, so that we have the serving and terminating conditions. Early in 1.21 we merged the PR that basically adds all the kube-proxy watch cache updates, so that we have the internal representation of the conditions in kube-proxy, but the actual proxy modes just ignore all the conditions other than ready.
J
Yeah, there is a PR open, but I haven't gotten to updating it. I think the biggest question we had in 1.20 was whether we wanted the fallback logic for internal traffic as well, and I think that's kind of where we left off.
B
I don't remember the thinking there; I'll have to go revisit it. Am I assigned that PR?
J
I don't think so, but yeah.
B
Yeah, if you want to discuss it, please assign it to me. I will be going through all of my assigned PRs as rapidly as I can. If it's not assigned to me, I promise you I will miss it.
A
That brings us to the end. Anything else folks wanted to see, anything that we didn't get to?
B
If you mean that one, I actually still have it open here; let me paste you the link. Was it this one, "replace update doesn't work for cluster IP and node port"? Yeah, yeah. Actually, yes, I'm interested in thinking about how to solve it. It's clearly not going to make 1.21.
B
The problem statement is: I did a PUT of a resource that set it to a particular state, and in that PUT I left clusterIP out.
B
I didn't say anything for clusterIP, and that means we go and allocate you one, right? Then they do another PUT of the same state and it fails, and the argument is that PUT should probably be idempotent, in the sense that you should be able to do this. And it can't, because we interpret that as you trying to set the clusterIP to the empty string, which isn't something we allow; we don't allow you to change clusterIP.
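The failure mode, sketched with a made-up service; the manifest never mentions clusterIP:

```yaml
# service.yaml: clusterIP deliberately omitted
apiVersion: v1
kind: Service
metadata:
  name: demo                   # hypothetical
spec:
  selector:
    app: demo
  ports:
  - port: 80
# First PUT: the apiserver allocates a clusterIP for us.
# Second PUT of the same manifest: the absent field decodes as
# clusterIP: "", which reads as clearing an immutable field,
# and the update is rejected.
```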
B
So what they're more or less asking for is: recognize that the clusterIP wasn't set and don't try to change it; basically, interpret a PUT sort of as a patch. I argued against it at first, but if you read the whole discussion, I sort of softened at the end; it actually starts to sound more reasonable.
B
So I think we could actually do it and it wouldn't be terrible, and I've certainly seen this issue come up enough times that I'm sympathetic to it. NodePort is actually worse: if you have a YAML that specifies nodePort 0, or rather doesn't specify nodePort, and you just reapply the same YAML over and over again, it will actually reallocate your node port.
B
The REST, yeah, the REST refactoring thing. I think they intersect, in that the state of the REST stack after my changes will make it easier to fix these things. Yeah.
B
No, PUT, and PUT is an update, right? Two POSTs should fail. But even that, depending on what you read about REST (there is no book on REST, right, or there are too many of them anyway), there's one school that says these operations should be idempotent, and applying the same content twice should...
B
I'm well aware. I mean, I've spent enough time in that code with you that I understand exactly what is being asked for, and that's part of what I said in there. I was like: yeah, I'm kind of sympathetic to the ideal; it's just never been the yardstick that we measure our changes by.
B
But I kind of like it, and if I could make it work reasonably easily, maybe we should; and actually I don't think this would be that hard, but I haven't had the time to really pursue it. So if anybody's interested in that, I'm happy to talk about it, but otherwise it's just sitting in my queue of things to look at eventually.
B
So read towards the end of the issue that I linked. I think we could reasonably easily change clusterIP to a pointer, and then the intent is pretty clear: if you set it to the empty string, it's still an error, but if you leave it nil, then your intention was "I don't care".
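A sketch of the three-state semantics a pointer would give. These are not the real Kubernetes types, just an illustration in Go of why `*string` can distinguish "omitted" from "explicitly empty" while a plain `string` cannot:

```go
package main

import "fmt"

// describeClusterIP models what changing a field like
// Service.Spec.ClusterIP from `string` to `*string` would buy:
// three distinguishable states instead of two.
func describeClusterIP(clusterIP *string) string {
	switch {
	case clusterIP == nil:
		// Field omitted from the PUT: the caller expressed no
		// opinion, so keep whatever was already allocated.
		return "unset"
	case *clusterIP == "":
		// Explicit empty string: still a validation error.
		return "invalid"
	default:
		// Explicit value: honor it (or reject a forbidden change).
		return *clusterIP
	}
}

func main() {
	empty := ""
	ip := "10.96.0.10"
	fmt.Println(describeClusterIP(nil))    // unset
	fmt.Println(describeClusterIP(&empty)) // invalid
	fmt.Println(describeClusterIP(&ip))    // 10.96.0.10
}
```

With a non-pointer field, the first two cases both decode to `""` and the intent is lost, which is exactly the ambiguity being discussed.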
B
I don't... I can't imagine how we can call that correct. Part of the reason I haven't pursued this further is that I want to think more deeply, and look through the code, to understand why that's happening, because that was surprising to me.
B
It
feels
like
we
can
make
a
similar
argument
like
if
node
port
was
a
pointer
and
you
don't
specify
anything,
it
means
leave
it
alone,
and
if
it's
zero,
then
it
explicitly
means
reallocated
right,
and
actually
this
goes
to
some
of
the
dual
stack
stuff.
Where
we've
talked,
I
mean
like
dual
stack,
you
can
add
a
second
ip,
but
not
get
rid
of
the
first
one
and
cal.
B
And
part
of
the
I
mean
it's
coupled
with
this
argument
of
we
used
in
the
past.
We
use
the
go
zero
values
to
indicate
unset
and
that's
ambiguous
right.
So
if
somebody
feeds
me
in
an
empty
string,
are
they
asking
if
the
empty
string,
or
did
they
just
not
say
anything.
K
You have three states, right: a value, an empty string, or a null if you're using a pointer. It becomes problematic if you start looking at it from a bigger picture: yes, you can solve for one case, and then another, and then another, but once you start looking at everything out there, it becomes a very hard problem to solve. Speaking of that, node ports even had this problem, which I think we fixed: oh, I'm switching my service type to a different type that doesn't need node ports.
K
Around
this
right
yeah,
what?
If
the
user
made
a
mistake
right
now?
What
and
then
you
add
it
to-
is
the
new
force
that
the
reservation
of
no
ports?
What
if
they
wanted
this
port
and
then
they
made
this
mistake,
and
then
we
released
that
board
now
the
user
is
in
a
situation
where
sorry
your
report
has
been
taken.
Yes,
it's
unlikely
to
happen
that
fast,
but
yeah.
It
just
goes
into
the
discussion
of
user
intent
and
apis
are
a
saying
that
usually
needs
some
serious
thinking
before
doing
it.
B
Yes, I appreciate your consistent perspective on this.
K
Yeah, I'm just trying to reach Pavitra, yeah. Can you please share a link for the doc, or an empty place for the doc, where we can start huddling on the node port range stuff, so we can think about it? I have space over the next couple of weeks to actually think and write stuff about that.
B
Yeah
well,
I
would
I'd
also
like
to
say
in
case
anybody
missed
it.
I
mean
I
think
we
talked
about
last
time,
but
the
the
dual
stack
is
beta
now.
So
if
you
spin
up
a
cluster
from
head
today,
you
will
see
all
the
dual
stack
fields
there
so
have
at
it
beat
on
it
see
if
it
makes
sense
and
already
I've
been
playing
with
it
and
and
it's
looking
looking
good.
I
don't
have
a
dual
stack
cluster,
but
I
can
still
allocate
dual
stack.