From YouTube: Kubernetes SIG Network meeting for 20230302
A
This meeting is being recorded. Hello, everybody, and welcome to the March 2nd edition of the SIG Network meeting.
A
Just a reminder that this meeting is under the Kubernetes code of conduct, which essentially boils down to: please be nice to one another. We don't have a whole lot on the agenda for today; we just have triage and grooming. So if you do have an item, take a couple of seconds to put it on there while we're talking and we can get to it. Otherwise, we'll just go over triage.
A
All right, so we'll get started from the top of the list here: "Is it still necessary to maintain the limitation on exposing port 10250 externally?"
A
B
That's about kube-proxy, the one above that I think Tim put in. As far as I know, it still listens by default on 10256, only for GCP internal load balancer health checks.
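For reference, a minimal sketch of what such a health check does: probe kube-proxy's healthz endpoint, which listens on port 10256 by default. The function name and the nodeIP parameter are illustrative; nodeIP stands in for whatever address the cloud load balancer targets.

```go
package sketch

import (
	"fmt"
	"net/http"
)

// nodeHealthy sketches a cloud load balancer's health check against
// kube-proxy's default healthz endpoint on port 10256.
func nodeHealthy(nodeIP string) bool {
	resp, err := http.Get(fmt.Sprintf("http://%s:10256/healthz", nodeIP))
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}
```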
D
So, the kube-proxy... the original history of this was providers, and honestly I forget which of them it was (I know Google was included in the list, but I don't know who else), where the load balancer could inadvertently expose things. If I created a service, an external service, on port 10250, it would be possible to access the kubelet on the internet. And the problem is that the kubelet API is actually really powerful.
D
It's not just a read-only API; it can do a bunch of stuff. And so we threw that limitation in, with a note to come back and check on it once all the load balancer implementations were fixed. Clearly we forgot to come back and check on it. I don't know for sure if all of them are fixed; I don't know that, because I don't think we ever enumerated them all.
D
We could still use that, but it's sort of a less weighty assertion now, because we know that's not all of the providers. And Jordan's point was that in some cases the default for the kubelet is no authorization, so it would still be wide open. So the impact of getting this wrong is pretty high.
D
I'm honestly not sure how to proceed. I would love to get rid of this little old wart in here.
A
No, no, sorry. I get the sense in getting rid of this old wart. I am a little curious, though; I'm not seeing what the drive for this was. It almost feels like something this person just stumbled upon and kind of thought, "that shouldn't be like that anymore." It would be nice to know if there was a driving cause behind this, because I could see this getting stuck in the swamp for a very long time.
B
A
E
This is, this is something to migrate for that.
A
If I understood this comment correctly, then under certain conditions that is true, but not under all conditions. There's some backwards-compatibility behavior that we have if you run the kubelet via command line; but if you run it via config file, it actually has a safe configuration by default, which is interesting.
B
F
D
And I guess the question is: is there any path forward where we just say YOLO, we're taking this out? Or do we make it a flag? Do we add it to the API server and say --allow-service-10250, and tell providers: if you know your provider is safe, you should go and enable this?
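A minimal sketch of what that opt-in could look like. Both the flag and the check are hypothetical; neither exists in kube-apiserver today, and the flag name is taken verbatim from the suggestion above.

```go
package sketch

import (
	"flag"
	"fmt"
)

// Hypothetical opt-in flag, as proposed in the discussion; not a real
// kube-apiserver flag.
var allowKubeletPort = flag.Bool("allow-service-10250", false,
	"permit Services to expose the kubelet port externally")

// validateServicePort sketches the gate: reject the kubelet port unless the
// operator has asserted their load balancers are safe.
func validateServicePort(port int32) error {
	if port == 10250 && !*allowKubeletPort {
		return fmt.Errorf("port 10250 (kubelet) may not be exposed externally; set --allow-service-10250 if your load balancers are known safe")
	}
	return nil
}
```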
B
D
I kind of... I agree with you, right. So maybe the use case in question here is dubious. That's actually a great point.
B
I was just gonna say: I know that there is a use case for having the NodePort service there. I've heard of that with virtual machines that are running as pods in the cluster, so I know that's a use case, and this doesn't seem that far off of it. But it just seems a little, maybe, overly broad as stated originally. So basically, maybe there is a valid use case in here for that, but it's just not clear to me if that's the intent of the reporter or not. Roger.
D
I captured a couple of thoughts in there, so I'll post those. But if you want to dig into it further, please, we can entertain it. Ultimately, if it's really important that we enable this, I'm not going to stand in the way of a flag or something; it's just unfortunate.
F
D
G
H
Yeah, so two small things. I mean, aren't there more ports that should be protected? And shouldn't there be a mechanism to inform the load balancer of which ports it shouldn't be able to use? That's one side of the matter. The other is that I checked the IANA list of ports, and port 10250 is actually not assigned; it's not assigned to anyone. So we should probably register this port at IANA if it's static and so important.
D
Yeah, we could. I don't know why.
A
Would you mind throwing a comment on this issue, just to that effect, real quick?
A
Okay, I'm gonna follow up with them. I do agree with Tim's call-out that we should probably make this a KEP, so that we're taking the time to make sure we don't YOLO it, because it kind of seems like a "here be dragons" kind of situation. Anything else to say about this one, or should we move on to the next one? Move on.
A
E
B
E
E
D
Yeah, I've seen this before in a couple of other places, where the load balancer just says: things look like they're going to hell, I'm just gonna spray to everybody. Or: things look like they're going to hell, let me flip to a different backend service that returns really cheap 404s instead. Or not 404s, but 503s or something, instead, right.
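A backend like that is tiny. A minimal sketch of a "cheap error" service of the kind described: if the load balancer decides the real backends are in trouble, traffic sent here gets inexpensive 503s instead of piling more load onto struggling pods. The port is arbitrary.

```go
package main

import (
	"log"
	"net/http"
)

// A bare-bones error backend: every request gets a cheap 503.
func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "service temporarily unavailable", http.StatusServiceUnavailable)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```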
G
D
E
So, to finish: you have a roadblock and you have an important point, okay. The way that load balancing is usually implemented, they can check readiness directly from outside the thing, but in Kubernetes this is not the case: the readiness is the one that you configured in the pod, and it's the one that we propagate with the endpoints, right?
D
So I think there are a few different things that this pattern mitigates. One is: our health checkers are unreliable and we don't believe what they're telling us. The other is: things are going to hell, everybody's overloaded, better to spray the traffic around and try to serve it than not, which I'm not sure I buy, but I see why people could get there.
D
The way I interpreted it... so I linked this back to another issue, right, which is the backup selector. The way I understood the backup selector, the real value is not "we don't trust our health checking." It's: something is really going wrong with this service, let me send traffic to an alternate service instead, not to do the same job, but to give a lower-cost error.
D
Like when you go to GitHub and you get the unicorn, right? That's coming from a different backend than the regular git servers, because their load balancer was like: crap, something's wrong, send them to the error page.
D
I think I interpret this request as: some threshold below which we send traffic to a different set of backends instead. So if I have 10 services and only one of them is actually reporting ready at the moment, just send them to the unicorn and let the unicorn deal with it, versus trying to overload that one. And then, when the rest recover, say we get back to 30 percent, then they come back.
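In code, the fallback semantics being described might look something like this sketch. The 30 percent threshold is taken from the example just given and is illustrative, not a proposed default; all names are invented.

```go
package sketch

// pickBackends returns the backup set (e.g. the "unicorn" error service)
// when too few primary backends are ready, and the primary set otherwise.
func pickBackends(readyPrimaries, totalPrimaries int, primary, backup []string) []string {
	const readyFraction = 0.3 // illustrative threshold from the discussion
	if totalPrimaries == 0 ||
		float64(readyPrimaries)/float64(totalPrimaries) < readyFraction {
		return backup
	}
	return primary
}
```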
D
Well, a backup selector... I mean, the original description was having two selectors in a service, one being the primary, the other being the "oh crap" one.
B
D
And so I thought it was... it's an interesting idea. I've heard it enough times now that I'm open to considering it. But the real question is: do we pound this into the Service API, or do we say, actually, here's another great use case where Gateway might be the better vehicle?
D
So, think: if we had a service... we have, like, a TCPRoute, right?
D
What if the TCPRoute was the place where we put the backup selector? Service is just such an overloaded API that everything we add to it comes with a hundred corner cases. If you look at the test matrix that's in the registry tests for Service, every new dimension we add adds thousands of lines of test cases, because it has to be tested against all the other combinations. So possibly this would make more sense in, like, a TCPRoute, instead of in Service.
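To make the shape of that idea concrete, here is a purely hypothetical sketch. Real TCPRoute rules have only backendRefs; the BackupBackendRefs field is invented for illustration, and BackendRef below is a simplified stand-in for the Gateway API type of the same name.

```go
package sketch

// BackendRef is a simplified stand-in for Gateway API's BackendRef.
type BackendRef struct {
	Name string
	Port int32
}

// TCPRouteRuleWithBackup is an invented extension of a TCPRoute rule,
// sketching where fallback backends could live without widening the
// core Service API.
type TCPRouteRuleWithBackup struct {
	BackendRefs       []BackendRef // normal traffic goes here
	BackupBackendRefs []BackendRef // hypothetical: used when the normal set is unready
}
```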
D
A
That does kind of put the onus on whoever... like this guy, for instance: if he's got his unicorn front end and his Python back end, right, like he said, in that situation, a Service of type LoadBalancer exposing it or something like that. That then means they have to put something else that speaks Gateway API in between, so it doesn't serve the case where you wouldn't necessarily want, like, an ingress controller in front of it.
A
That'd be my only worry, I guess, is that if...
D
If we... I would love to see us as a project get to a place where the role that Service fills was minimized and minimized and minimized, right. And so, if they wanted to create a cluster IP, we could have a cluster IP gateway (gatewayClass equals cluster-ip, right), and then in the TCPRoute for that you could have your backup service.
D
Instead of jamming it into Service... sorry, a backup selector, instead of jamming it into the Service API itself. The conversation digressed a little bit into this idea of: do we derive a new API resource from Service that is just a selector, like a standing query? But it's a lot of ideation without a lot of follow-through right now. Cal, I saw your hand go up, and then I think it went down. What do you think?
F
And it's also like one of those things: it's like putting a user-space thing in a kernel module. Why should we do that? It should be on a top layer, one layer above, not one layer below. And to be honest, not everybody will do that. So a Gateway sounds to me like a perfect place for this, right.
A
I should say that on first consideration, I agree; that does seem like a compelling feature. But I do feel like lately, in the same way that we worry about overloading Service, we say "that should go to Gateway" a lot, and that does give me some pause sometimes.
D
As the person who's throwing the fire hose at you guys, I feel that. But Gateway is already designed to be... this is my internal excuse: Gateway is already designed to be a modular API with multiple extensions and extension points. So adding something to Gateway doesn't mean slamming another field into the same resource.
E
H
B
D
So I've seen a couple of cases where people are trying to build an API that looks a whole lot like Service, like it's a selector with a port, but they don't want to use Service. Or they try to use Service, but Service is so overloaded that they have to then say: well, what if the user makes it a headless service? What if they make it a LoadBalancer service? What if it's a NodePort? What if they set ExternalName and then point your thing to this?
D
None of those things make sense in those contexts. And so what I've seen a couple of times, not a lot, but a couple, is that what they really just want is a selector that generates endpoints, and they don't want to write that logic themselves, because it's complicated logic, right.
D
This is part of the discussion of externalizing the EndpointSlice controller, right, and using that as a library. But the other direction would be: what if we just had a resource that was a pod selector that generated EndpointSlices? And, like, the same way when you... you know, Antonio, the work with IPAddress, right: you create a service and you get an IP address. What if you created a service and you get a pod selector, and the pod selector is the thing that actually triggers the endpoint generation? Does that make sense?
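Roughly, that "standing query" resource could be as small as this sketch. Everything here is invented for illustration: the PodSelector type, the equality-only matching (the same semantics as a Service selector), and the controller loop that would write the result out as EndpointSlices.

```go
package sketch

// PodSelector is the hypothetical resource floated above: nothing but a
// selector that materializes endpoints, with none of Service's other
// behaviors.
type PodSelector struct {
	Name     string
	Selector map[string]string
}

type Pod struct {
	Labels map[string]string
	IP     string
	Ready  bool
}

// endpointsFor sketches what the controller for such a resource would do:
// collect the IPs of ready pods matching the selector, to be written out
// as EndpointSlices.
func endpointsFor(ps PodSelector, pods []Pod) []string {
	var addrs []string
	for _, p := range pods {
		if p.Ready && matches(ps.Selector, p.Labels) {
			addrs = append(addrs, p.IP)
		}
	}
	return addrs
}

// matches implements plain equality matching over labels.
func matches(sel, labels map[string]string) bool {
	for k, v := range sel {
		if labels[k] != v {
			return false
		}
	}
	return true
}
```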
E
No, what I'm saying is: we have to forward traffic to something, and it seems that you have the use cases, and it's clear from your comment that people are focused on pods. What is the end destination? And what I'm asking is: is that the right backend, or should we move people to work with the primitives and all these things that already orchestrate?
D
H
E
H
A
We
are
hitting
the
half
hour
on.
D
On this issue, we've mostly moved on to the linked issue of backup selectors, so if people want to keep the conversation going, that's where it seems to be right now. Okay, but yes, I think we can triage-accept this one and move forward. Or rather, I guess the question is: should we just de-dupe it to backup selector?
A
That's, I think, the question, and this is what I was going to say: I think the concept that we might be able to put some fallback routing behavior into Gateway API makes sense, but I think there are two questions we need to ask this person. One of them is whether that interpretation of backup selector is actually accurate to what they want, because it's not clear to me 100 percent that it is. We should just verify that, and then see if something like putting that in Gateway API would be tenable.
A
B
A
B
D
That's a fair question. When I first saw the backup selector discussion, my thought was: why don't you just scale out instead of having a backup selector? And the thing I came to realize was that the backup selector is not a replacement for the service, like it's doing the service's job. It's just something that says: oh no, oh my God, something's happening, load shed, load shed, return errors, be super cheap.
D
Yes, I mean, in a lot of the ingress implementations we have the default 404 server, right: if you gave your ingress controller a URL that it didn't understand, it would just send you to the 404 server, and all the 404 server would do is return 404. Okay.
E
A
A
E
D
D
E
G
D
E
B
E
The thing is, we have this issue on the service side, and all the things: what is configuration? What is installation? And we have the cluster CIDRs, the DNS config, and we are moving towards making it...
A
Without... if I'm reading this right, hearing this right: without more information from him about what problems this is causing, this seems like it's just something that's inconvenient, so this is kind of in the nice-to-have territory. Yeah. So is it the kind of situation where we should accept it and then put it on, like, priority/backlog? If somebody wants to get to it eventually, they can.
E
H
E
Is there any precedent that we touch the kubelet config, even? Because that used to be SIG Node territory.
D
A
D
I'll respond to this one. You can assign it to me. Actually, I'll just edit my answer a little bit, recap what we talked about, and I'll accept it, which will force us to look at it again in a few months, because nobody's going to work on it in the meantime; that's my prediction.
A
Okay, that's good. This one I haven't seen at all: failure in the cluster test for the node IPAM controller with a certain mask. Oh dear.
E
Yeah, he's sending a fix for that, right.
E
No, it's timing out with this, yeah. But during the test refactor he found out that we were doing a log.Fatal in the controller, and I was commenting with Jordan, and I said it's better that you don't do that; do it properly, or propagate the error. But, well, you can assign it to him. Okay.
H
E
A
I'll do that async right after. All right, let's go. Okay, so actually we only have one more thing. Bridget, we do have the time, so if you'd like, we can look at some of the frozen stuff.
I
A
E
A
C
I mean, what do we say here? Yeah, like, there's really no consensus about why external IP even exists. The kubelet does not internally ever look at, ever care about, multiple node IP addresses and internal IP versus external IP. So it's all external tooling. And so one argument is: oh yeah, we need to add the ability to have external IPs on bare-metal nodes so that, you know, blah blah blah. And then the other argument is: well, you know, we lasted this long without it.
E
B
B
B
C
B
Well, maybe a different question is: what actually cares about the external IPs that the kubelet writes there anyway?
A
So it kind of sounds like we would really need to see somebody from the outside pressing us for this for us to actually work on it. So maybe it is... I mean, I don't know. I've said this before: I feel like closed does not mean dead forever, but I don't know how everybody else feels about that, necessarily. So maybe it's fair to close it. It's not something we're prioritizing at all, but that doesn't mean that somebody couldn't come along later and we reopen it if we need to, because they want to push for it.
A
That's something Tim brought up last time too, which is why I'm not trying to be too pushy about it: people don't usually search through closed issues. I tend to, but it's probably accurate that the vast majority of people don't, so duplicates, if nothing else. But the alternative, I guess, is that it's open forever without anybody working on it. Is that worse than somebody creating a new one when they want it?
C
Assign it to me, because I need to revisit the cloud node IP KEP in the next cycle anyway, because as I started implementing it, it yet again turned out to be more complicated than we had thought. So I would say, leave it open and assign it to me.
E
D
A
G
I
D
E
D
I mean, if I understand correctly, this is about the TLS certificates, right? And it sounds like any application that serves HTTPS with a TLS certificate has this problem. It's not a kubelet or kube-proxy thing; it's that applications don't generally go back and reload their HTTPS certificate. Or rather, that's not a default feature of, like, Go's HTTP stack.
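For context, Go's standard library does let a server pick up a rotated certificate if you opt in via tls.Config.GetCertificate, which is consulted on every handshake. A minimal sketch (re-reading from disk each handshake is the simplest, if inefficient, version; real code would cache and stat the files, and the file names here are placeholders):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	cfg := &tls.Config{
		// Called on every TLS handshake, so a rotated cert on disk
		// takes effect without restarting the process.
		GetCertificate: func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
			cert, err := tls.LoadX509KeyPair("tls.crt", "tls.key")
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}
	srv := &http.Server{Addr: ":8443", TLSConfig: cfg}
	// Empty file arguments are allowed because the certificate comes
	// from GetCertificate above.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```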
B
A
E
E
D
B
C
B
B
Well, the first comment at the top of the issue seems pretty clear. If you go all the way up, right there under "what happened": it says the TLS certs were updated properly, but kubelet and kube-proxy keep the old certs in memory. This causes them to fail when communicating with the API server.
E
D
So, let's wrap this one up. Does anybody want to look into it? It's actually kind of an interesting problem in general; I don't think it's necessarily specific to kube-proxy or the kubelet. It is interesting. It doesn't seem like it's on fire, but if somebody was interested in this area and wanted to take it as a background task, that would be good.
D
E
For the kubelet, it has something that cuts the connections, correct? Because it has a dialer cache, and when it has to renew, it closes all the connections, and that forces a reconnect from the client side.
D
Yeah, I mean, we could do this like we did with... was it kube-proxy? Like: well, your config has changed, os.Exit, and let the controllers restart you.
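That "just exit and restart" pattern can be as small as the sketch below: poll the file's mtime and exit when it changes, letting whatever supervises the process (systemd, the kubelet's restart policy) bring it back up with the fresh config or certificate. All names here are illustrative.

```go
package sketch

import (
	"log"
	"os"
	"time"
)

// exitOnChange exits the process when the watched file's mtime changes,
// relying on an external supervisor to restart it with fresh state.
func exitOnChange(path string, interval time.Duration) {
	initial, err := os.Stat(path)
	if err != nil {
		log.Fatal(err)
	}
	for range time.Tick(interval) {
		cur, err := os.Stat(path)
		if err != nil || !cur.ModTime().Equal(initial.ModTime()) {
			log.Printf("%s changed; exiting for a clean restart", path)
			os.Exit(0)
		}
	}
}
```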
D
A
D
We have that for something else; I forget. Antonio, do you remember what it was that we were doing that for?
E
D
D
D
A
I
E
A
Antonio, it sounded like maybe you had one other topic you wanted to kind of bring in real quick before we...
E
Yeah, because I have a conversation going in the Kubernetes Slack about, you know, that we now have an object inside the API that configures the IPAM, the pod CIDRs, on the node object, and what I'm trying to do is to create the same but for services, a service CIDR. But, as Cal correctly pointed out, if we are going to allow people to have overlapping between the pod CIDRs and the service CIDRs, we need to have a way to avoid that.
E
We right now are avoiding that, because we are passing both parameters at startup, and there is only one node IPAM controller, and that controller checks the ranges and rejects one of them. So what I was thinking is: if we have a service CIDR object, we can add a status condition, and the node IPAM controller will be the one to populate this condition, to say whether you can use it without overlapping or not.
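A minimal sketch of that status-condition approach, using the standard apimachinery condition helpers. The "Accepted"/"OverlappingCIDR" strings are invented for illustration, not an agreed-upon API.

```go
package sketch

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setOverlapCondition records whether a proposed CIDR overlaps existing
// ranges, the way the (hypothetical) node IPAM controller would report it.
func setOverlapCondition(conditions *[]metav1.Condition, overlaps bool) {
	cond := metav1.Condition{
		Type:               "Accepted",
		Status:             metav1.ConditionTrue,
		Reason:             "NoOverlap",
		Message:            "CIDR does not overlap any existing range",
		LastTransitionTime: metav1.Now(),
	}
	if overlaps {
		cond.Status = metav1.ConditionFalse
		cond.Reason = "OverlappingCIDR"
		cond.Message = "CIDR overlaps an existing range and must not be used"
	}
	meta.SetStatusCondition(conditions, cond)
}
```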
D
I'm,
just
thinking
I'm,
trying
to
page
back.
F
E
Yeah, but I commented on this with other people that are not in SIG Network, and they told me that the canonical way to solve this problem is with status conditions, and a controller populating the conditions. I just wanted to write it down. Whether this is a quick win or an opt-out is something we should think about.
D
Historically, the service ranges win, right? But that was only because we could do it at startup time. What happens now that this is all asynchronous to each other? What if I create a service range that overlaps an existing node's?
E
No, but the thing is the use cases: people are gonna want to have one or two prefixes. Well, so this initially was because the allocator didn't support more than one range, but the use case is people whose service CIDR is full and who don't have any way to renumber or to grow it; I mean, you cannot resize the service CIDR once it's already set.
A
Is
this
all
in
that
slack
conversation
you
were
talking
about
or
is
there
a
relevant
issue.
D
Oh,
you
have
a
new
one,
but
service
doesn't
support,
match
expressions.
Deployment
does
but
service
doesn't.
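The asymmetry being pointed out, shown with the actual API types: Deployment takes a full LabelSelector, including matchExpressions, while Service's selector is only a flat map of exact label matches. The label values are illustrative.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Deployment accepts set-based requirements via matchExpressions.
var dep = appsv1.DeploymentSpec{
	Selector: &metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{{
			Key:      "tier",
			Operator: metav1.LabelSelectorOpIn,
			Values:   []string{"web", "api"},
		}},
	},
}

// Service's selector supports equality matches only.
var svc = corev1.ServiceSpec{
	Selector: map[string]string{"tier": "web"},
}
```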
D
D
D
D
Okay, it's sort of the same question as the all-ports question: we're, superficially, sort of hemmed in by past API decisions. We can't change past API decisions, but actually, in practice, if we do the right pirouettes at the right time, we can avoid the lasers. It just is a lot of work. If somebody wants to undertake it, we can talk about it.