From YouTube: Kubernetes SIG Network meeting 20210610
B: Or we're confused, or someone's confused. Okay, this particular one, it looked like it. Tim was saying that if we have to special-case something like what you were talking about together — basically my question for this one is: are we accepting triage on this, you know, are we marking it triage/accepted, or are we not sure yet if this belongs to us?
B: Oh, fantastic, then I will just skip it. All right, and this one — do we think this is even us?
I: Yeah, I looked at this one, and it seems perhaps they need to let us know which CNI is implementing this type of policy. And I guess if they have Calico or Cilium or Antrea or any of the CNIs, then maybe they need to ask that community, because, you know, there's no implementation upstream.
A: Yep. I mean, but also, if we go up to the top, you know they have two "from" blocks, right, and one of them is — oh, okay, but there's ports there too. Yes.
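For context, the shape being described — one ingress rule combining two "from" blocks with a "ports" list — looks roughly like the sketch below. All selectors, the CIDR, and the port number are invented for illustration; they are not taken from the issue under review.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ptr is a small local helper for taking the address of a literal value.
func ptr[T any](v T) *T { return &v }

// twoFromBlocksPolicy sketches a single ingress rule with two "from" peers
// plus a ports list -- the combination discussed above. Everything here is a
// placeholder.
var twoFromBlocksPolicy = networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
		Ingress: []networkingv1.NetworkPolicyIngressRule{{
			From: []networkingv1.NetworkPolicyPeer{
				{PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"role": "frontend"}}},
				{IPBlock: &networkingv1.IPBlock{CIDR: "10.0.0.0/24"}},
			},
			Ports: []networkingv1.NetworkPolicyPort{{
				Protocol: ptr(corev1.ProtocolTCP),
				Port:     ptr(intstr.FromInt(8080)),
			}},
		}},
	},
}
```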
A: I mean, honestly, I don't think this is an issue that we should deal with. It is a network plugin issue, and we should probably end up closing the issue.
A: I know. Maybe we want to let the person reply, and then just be nice and follow it up and tell them where to go. You know, so if they say Calico, point them at Casey — just kidding.
A: Yeah, I would ask, you know: are they running kube-proxy? What network plugin, or what CNI plugin, are they using?
K: I don't know. I mean, that would be a good start — like, we don't even have a test for SCTP. But, you know, like most contributors — or sorry, like most bug filers — I expect this person to say "my problem is solved, I'm gone," right? Which is not an indictment of them, but it doesn't help us. Yeah.
A: I mean, maybe kube-proxy gets fixed to set the SCTP timeout stuff as suggested there, but we would have no way to test that that actually makes a difference.
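As a point of reference, an SCTP test would have to create a Service whose port uses the SCTP protocol, roughly like the sketch below; whether it works end to end depends on the CNI plugin and kernel SCTP support. Names and the port number are invented.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sctpService sketches a Service exposing a single SCTP port, the kind of
// object an SCTP test would exercise. Names and numbers are placeholders.
var sctpService = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "sctp-example", Namespace: "default"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "sctp-server"},
		Ports: []corev1.ServicePort{{
			Name:     "sctp",
			Protocol: corev1.ProtocolSCTP,
			Port:     9999,
		}},
	},
}
```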
A: And this will be our last issue. We will be moving on to KEP review now.
K: So what do we want to do on review? I mean, we can start going through the KEPs just to make sure that we're all aware of them — or, I forget what goal we decided on.
K: Okay, all right! Well then, let's skip the evaluated-but-not-committed ones for now and look at the pre-alphas. So, graceful termination for external traffic policy — I know Andrew was working on that. Andrew, are you here?
L: Yes, I think it's just waiting for more reviews. I think you had concerns about whether the amount of iptables rules we're adding is going to really impact performance — just that, right, for the not-ready set. I think Antonio mentioned he might be doing some testing on that. I personally have not done extensive testing to know 100% whether that performance impact is going to be huge or not.
K
So
I
guess
my
question
my
biggest
question
is
this:
will
cause
extra
ip
tables
rules
to
be
written,
whether
or
not
that
makes
a
difference?
I
don't
really
know
the
question
I
have
is:
does
we
have
to
do
that
or
is
there
a
way
that
we
could
avoid
those
table
rule
those
tables
that
are
never
jumped
to.
L: Yeah, I think the tricky part is that it's just a chain definition that is included for all endpoints, whether they're terminating or not, so that you can update the probability.
L: Yeah, I think the current implementation is based on the flow of kube-proxy's code, where we don't actually know if an endpoint is local or ready or not until way later. So I think if we reworked the code a bit, maybe we could change it so that we do a full evaluation of all the rules — and whether it's local or not — before we add the endpoint chains.
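For readers following along, the "ready or terminating" state that kube-proxy has to reason about here comes from the per-endpoint conditions in an EndpointSlice. A minimal sketch of an endpoint that is terminating but still serving (the address is invented):

```go
package example

import (
	discoveryv1 "k8s.io/api/discovery/v1"
)

// boolPtr is a small local helper for taking the address of a bool literal.
func boolPtr(b bool) *bool { return &b }

// terminatingEndpoint sketches an EndpointSlice endpoint that is no longer
// ready but is still serving while it terminates -- the state the graceful
// termination work keys off when deciding which endpoint chains to program.
var terminatingEndpoint = discoveryv1.Endpoint{
	Addresses: []string{"10.0.1.5"},
	Conditions: discoveryv1.EndpointConditions{
		Ready:       boolPtr(false),
		Serving:     boolPtr(true),
		Terminating: boolPtr(true),
	},
}
```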
K: Yeah, okay. I'll make a pass through it again, and I'll think about whether the risk here is high enough to defer until we do that rework. Now that you've said it, it makes me really want to do it. I mean, I know that code needs cleanup anyway — so yeah, you can't see me because my camera's off, but I'm doing quotey fingers around 'cleanup.'
L: Yeah, and I'm happy to do the refactor if we need to do it. I'll just need a bit more time — you know, a few weeks.
I: Somebody want to speak to this? Yeah, maybe I can say something. I pushed a comment yesterday based on some of the comments that seemed to have some sort of fair agreement, but there are wider discussions to, you know, make sure that we are solving the correct thing — that we are actually capturing the correct intent. And for that, in the SIG Network policy API repository, the team has pushed a few use cases as separate PRs, and we are reviewing them and trying to make sure that, you know, they align with the intention. And we would like to have the SIG Network folks come review those separate use cases, so that we can hash them out separately and not intermix them.
I: So once we are ready, I think we'll add you guys to the review list on there, and then, you know, we will continue to work through comments and try to address individual comments. But yeah, we as a team, I think, are continuing to work on these different points that have a lot of... it's like a battleground.
K: Okay, sorry, I have to step away for a second — there's a bit of a situation going on here. This sounds okay. I would just encourage you, and everybody: make noise when you're ready for review. If something is sitting around waiting for review, that's a problem, but sometimes it just gets lost in the filters. So everybody, please jump up and down and wave your hands if you have something that's waiting for review. Dan, can I pass this back to you to keep going with KEP review? I'll be back in a few minutes.
B: Quick question before we move on, for the two that we've already reviewed: can the person who updated us maybe put a comment on the one that they had open, just so that next time we look at it we'll be like, "Oh yeah, we talked about this last time, blah blah blah, that was the current status"? Just a one-liner. I feel like that would probably be helpful.
B: Anyway, back to the next one. Sorry — where did you put a comment? The one that was just open, like literally what we were just looking at? Okay.
A: Okay, good point, Bridget. All right: reworking kube-proxy.
C: We originally had nftables — that's the thing that Mikhail made — and then Ricardo sort of put together an IPVS prototype, and we pushed that in like two or three weeks ago. And then this last week we built a bunch of automation and stuff around spinning up KPNG instances with IPVS and nftables and documented some issues, because it's weird — nftables fights with iptables, and I don't really know why — so there's some freakiness around there.
A: Okay. I guess anybody else who is interested in making kube-proxy less of a dumping ground for stuff, please help out with that effort. All right: services cluster IP and node port allocations API. Antonio, is there anything that you need from SIG Network in general here?
E: No, this is a complex topic. I mean, I think basically this is waiting for Tim — he also wants to do a big refactor on the service cluster IP and node port allocation.
K: Yeah, sorry, I'm back. This one in particular, I think, addresses a sort of architectural problem with the IP allocators, so I really do want to get it in. It's not urgent — like, it's not on fire — but I do think I want it to go in if we can get it in. It is stuck on me. I know, Antonio, you sent me a repo that I'm supposed to look at and I just haven't had a chance, but, you know, other people are welcome to look too.
A: Okay, all right. So would the ask be that we get some more review? Or what do you need to move this forward, Antonio — besides review from Tim or others, anything?
E: So, a lot of things can go wrong, and this is an important architectural change, so reviews and ideas. But please try to follow the thread a bit, because it's a long thread, and I don't want to take the risk of starting over again, you know. Yes.
K: Yeah, so there's an ongoing discussion about how to load CRDs, and I don't want to put too much pressure on the folks who are doing that work. But we do need to decide at some point whether this is going to be a CRD or a built-in type. It sounds like there's some reality creeping in that if we propose this as a built-in, it would probably be okay with folks like Clayton. So I feel like, if I can get Clayton on our side, then we can get it as a built-in. But I've spent a little time this week playing with the most recent CRD validation and stuff, and I want to see if it's possible to define the type as a CRD and still make the controller built in — like one step towards the goal — but that depends on being able to have a solution to the CRD loading problem.
F: But having the type as a CRD makes it editable, right?
F: Every CRD — a CRD does not affect the core of the system, right? But you can have a failure like that. The biggest failure that we had with CRDs was early on, where you have a webhook against — instead of, like, a partial webhook against a certain type — you have a webhook that fails, and it happened to be in the path of everything, and that essentially broke things across the cluster. But in our case, if you want a core type that is based on a CRD, and then the user edits the CRD...
K
It's
it's
something
to
think
about
yeah,
it's
true!
It's
a
good.
A: One of the ways we deal with that in OpenShift is that we have types that are CRDs that are input to our operators, and the operators decide what actually makes it into the CRD that the rest of the cluster actually looks at, so there is like an additional layer of validation there. It's not a, you know, perfect fix, but it is one way to avoid that kind of thing.
A: So — but we do need to move on. So one of the things I know... go ahead, Tim.
B: Yep, that's dual-stack related. The most recent update was eight days ago; a lot of people weighed in, which I appreciate. And even though it's June and 1.23 seems far away, I would like us to make a little progress here in terms of what we want to do about this bug. So yeah, Cal, do you want to kind of summarize?
B: This is the PreferDualStack repair-loop stuff? Yes, where we could...
F: So the upgrade path is easy, right. The downgrade path, though — that really is funky, because right now, what if you're downgrading a service where, let's say, a user created the alternative-family-first kind of service? So the cluster is IPv4,IPv6, but the service is IPv6,IPv4.
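To make that ordering concrete, a sketch of the kind of Service in question — PreferDualStack with the alternate family (IPv6) listed first on an IPv4-primary cluster. Names and the port are invented.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var preferDualStack = corev1.IPFamilyPolicyPreferDualStack

// ipv6FirstService sketches a PreferDualStack Service whose ipFamilies list
// the alternate family first (IPv6 on an IPv4-primary cluster) -- the
// upgrade/downgrade ordering case mentioned above. Values are placeholders.
var ipv6FirstService = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	Spec: corev1.ServiceSpec{
		Selector:       map[string]string{"app": "example"},
		IPFamilyPolicy: &preferDualStack,
		IPFamilies:     []corev1.IPFamily{corev1.IPv6Protocol, corev1.IPv4Protocol},
		Ports:          []corev1.ServicePort{{Name: "http", Port: 80}},
	},
}
```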
F: Yes, single to dual is okay, right. Dual to single is something that we need to — the thing that I promised to look at, which was: can we test that? So the problem — the issue was discovered when a user was patching a service, editing a label, for example, on a PreferDualStack service, and then it gets automatically upgraded.
F
That
was
the
issue
all
right,
so
I
looked
at
the
code.
We
can,
in
theory,
look
if
the
update
is
not
touching
the
the
that
cluster
ip
related
fields
as
we
call
them
and
we
can
have
like,
if
not
then
do
not
upgrade,
but
that's
a
way
more
completely
complicated
code
that
needs
to
go
in
I'd
rather
not
have
it
really
right,
but
it's
not
a
hell.
I'm
gonna
die
on.
K: Documenting the current behavior seems like the worst answer, because it's totally random what happens and when. Like, you just don't know: the next time you — or anybody — touches your service, it could be a label, it could be anything, and you're going to get changed. That seems like the worst option. So I think either we make it happen under a control that we understand, and we document the edge cases where we can't make it happen, or we do the gross work to make it not happen.
E: Yeah, but the thing is, the problem we want to solve is: I have PreferDualStack and, without noticing, it's dual. But that is going to happen with pods too. So let's say that you have PreferDualStack and the cluster is dual, but the service is still single because the pods are still single, and suddenly somebody restarts the pods and then it becomes dual.
K
Depends
on.
We
know
that
a
lot
of
not
a
lot
of,
but
some
cni's
already
assigned
two
ips.
They
just
don't
have
a
way
to
report.
It.
F: If we don't — let's say we get the code in that checks, "oh, they touched cluster IP fields, let's actually make it dual stack" for PreferDualStack services, and we have that in place — how would a user upgrade the service? How can a user go and make sure that a PreferDualStack service is actually dual stack?
A: Okay, so Cal, you will have another look at this issue versus the...
A: Okay, next up, in the 15 minutes we have left: Tim, you have network policy and, quote unquote, "external" traffic.
N: Antonio, if I'm correct, were you referring to the slides which are somewhat related to Tim's topic about the network model for host network and so on?
E: Yes — we don't always require that, for some reason.
N: Antonio, this is Sanji. If some of this involves updating our definition of the Kubernetes network model, that's an area in which I'm interested. If I can help in some way on docs, I'm happy to do that, so I'll contact you offline.
A: Okay, moving on to CRI API for network checks — Antonio again. Okay.
E: We'll talk about this in another SIG Network meeting; I don't think that this is something that we need urgently, but I see that we can have some benefit from it in the long term. So basically, instead of the kubelet sending the TCP and HTTP probes to the pods, we add an API to the CRI so that these probes are executed directly by the container runtime in the pod's namespace.
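For reference, this is the kind of check the kubelet issues today and that the idea above would move into the container runtime: an HTTP GET probe against the pod. The path and port below are invented.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpProbe sketches the HTTP readiness check the kubelet performs against a
// pod today; under the idea above, the container runtime would execute the
// same check from inside the pod's namespace. Values are placeholders.
var httpProbe = corev1.HTTPGetAction{
	Path:   "/healthz",
	Port:   intstr.FromInt(8080),
	Scheme: corev1.URISchemeHTTP,
}
```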
K: I think it's an interesting idea. The biggest issue that I see is compatibility with folks who are using localhost today to mean the node, so at the very best we'd have to special-case that. Other than that, I could see this happening, but I don't know how big of a change it is in CRI, and I don't even know who to talk to around CRI at this point. So I tagged a few of the SIG Node people, and I don't know how far up the priority list this one rises.
K: More node than network — I agree. I think it'd be a fun one to look at, and it'll cross a lot of components; you know, you get to make some changes to containerd and CRI-O and all the other ones.
C: Yeah, it also would be cool for — yeah, I think it'd be really cool to have this in containerd as well; it would make it a lot easier. Well, I mean, I think it would make the network policy stuff easier to reason about too, and then for alternate container runtimes and stuff it would make it a lot easier to know what's going on.
K: Yeah, and if I recall, Mike at IBM — they had some weird model where they were making that work through some extra agent, like it wasn't actually direct.
A: Right, but Mike's cases had overlapping IPs in the cluster and, like, multi-tenant setups.
K: So, like I said, I think this is probably a good idea in general, but it's just not getting done, like, today. So keep it open for sure; maybe give it an important-longterm tag and see if anybody wants to help.
A: Right, okay, all right. Yeah, it's priority/important-longterm — okay, wait, triage/accepted, then priority, like that, right? Yup, okay, all right. So, in the five minutes we have left, the last one is all-ports support status.
D: Yeah, thank you. That might take more than five minutes, so I'll probably move it to the next meeting as well, but I can give a short update today.
D: Yeah, thank you. So yeah, this is about the all-ports services KEP that you're seeing right now. I made some slides as well to provide an update. We had sent out a survey to users on the Kubernetes issue to get more of an idea about the use cases, and also whether the approach we had discussed in the KEP would work. So I wanted to quickly go through those.
D
So
yeah
I'll
jump
right
into
the
sorry
jump
right
into
the
results
that
we.
D: Okay, yeah — the approach we had, we'll get to later. So anyway, the survey that we sent out was to identify what use cases folks were running into, and whether the approach we were suggesting — which was that all-ports would be specific only to headless services plus load balancer services — would be sufficient. And out of 17 responses, what we got was that 70% were okay with IP-level load balancing, but a big percentage wanted support for both cluster IP as well as load balancer services. So just headless plus load balancer didn't seem to be enough. Yeah — I just have more info about it. There were some use cases specified, but the details were not very clear, like why you want it for cluster IPs, but here were some use cases.
D
The
zip
protocol
rtp,
which
we
knew
about
before
webrtc,
which
is
like
video
conferencing,
which
is
under
the
use
case
we've
heard
about.
Then
there
was
ftp
video
gaming
and
there
were
also
some
about
portrait
mapping,
which
would
essentially
mean
we
do
port
ranges,
rather
than
the
entire
port
range
entire
valid
port
space,
which
is
the
which
is
what
the
cap
was
talking
about,
or
aiming
to
do.
So.
These
were
some
use
cases
and
the
max
number
of
ports.
D
It
seemed
like
10
out
of
17,
wanted
10k
or
more
ports,
and
what
what
was
somewhat
surprising
to
me
was
that
node
port
needed
to
be
generated
for
each
service
in
that
portrait,
each
port
in
the
in
the
port
range
that
was
being
exposed,
which
again
won't
be
enough
with
the
with
the
ip
level
the
balancing
we
were
talking
about
in
the
cap,
but
it
did
say
that
the
ip
level
of
balancing
would
be
sufficient
for
for
a
big
majority
of
the
use
cases
yeah-
and
I
know
we
are
out
of
time,
but
there
were
some
answers
for
why
this
wouldn't
work.
D
So
I
I
tried
to
reason
through
why
they
we
could
still
make
that
work
with
the
ip
level
of
balancing
that
that
people
are
going
towards,
but
anyway,
after
all,
these
results.
The
one
other
thing
was:
if,
if
we
could
just
improve
the
ux
and
provide
an
easy
way
to
translate
a
start
and
end
port
to
just
the
whole
spec
which
again
was
was
not
going
to
solve
most
of
the
use
cases,
a
big
percentage
of
the
use
cases.
D
So
I
wanted
to
get
thoughts
on
how
we
want
to
move
this
world,
given
that
headless,
plus
low
balancer
service
might
not
satisfy
majority
of
use
cases
plus
it's
it's
also
kind
of
tweaking
around
to
find
find
enough
hoops
in
the
validation
to
get
it
passed.
So
I
was
wondering
if
the
next
step,
we
should
be
to
explore
relaxing
the
non-zero
port
requirement.
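As a rough illustration only: today the API rejects a ServicePort with port 0, so the object below is a hypothetical sketch of what relaxing the non-zero-port requirement might allow, not something that works in any released Kubernetes.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allPortsService is a hypothetical sketch of an "all ports" Service under
// the proposed relaxation of the non-zero port rule. Current validation
// rejects Port: 0, so this does NOT work today; it only shows the shape
// being debated. Names are placeholders.
var allPortsService = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "all-ports-example", Namespace: "default"},
	Spec: corev1.ServiceSpec{
		Type:     corev1.ServiceTypeLoadBalancer,
		Selector: map[string]string{"app": "sip-gateway"},
		Ports: []corev1.ServicePort{{
			Name:     "all",
			Protocol: corev1.ProtocolUDP,
			Port:     0, // hypothetical "match every port" marker
		}},
	},
}
```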
D: It seemed like — my opinion was that we should probably move that to the Gateway L4 implementation rather than change the Service API so much. But yeah, that's, I think, kind of where we are, and I wanted to see if we can go ahead, and whether I should just rewrite the KEP to do the service-port restriction relaxation approach. And I just wanted to also share the results we got.
K: So I think the interesting question to think about on this is: if we didn't have any restrictions and we were doing this sort of from scratch, what would we want the API to look like, and can we get even close to that? And I think the idea of relaxing the zero-ports requirement gets closest to that. But to do that we'd have to be super, super careful — like you say, soaking it for several releases — because we have to wait till old clients would have aged out, and we know that we have a version-skew rule with respect to nodes. So, like, we'd have to know that the oldest kube-proxy understands no-ports or whole-IP services and won't crash. So that, you know, at the most it would be like a year of "roll it out into alpha and just wait," and then we could at least say, well, the oldest kube-proxy is safe.
D
Right
right,
yeah
yeah,
that
makes
sense.
I
had
also
scanned
the
the
dns
core
dns
and
dns
implementations,
so
those
seem
to
handle
the
zero
port
as
well,
but
yeah
makes
sense
to
keep
it
in
alpha
until
the
two
version
behind
q
proxy
has
is
able
to
pick
up
the
zero
port
array
correctly.
Okay,.
K: Right, we can't do it today strictly for compatibility reasons — like, kube-proxy indexes on the port number all over the place, right, so that would explode kube-proxy today. But if we fixed it in kube-proxy and then rolled it out and just waited until the version-skew window matches, then, you know, a year from now we could have a discussion about actually moving to beta.