From YouTube: Kubernetes SIG-Network Meeting for 20230427
A
This meeting is being recorded. Hello, everyone, and welcome to the April 27th edition of the SIG Network meeting. Just a reminder for everyone, as usual, that this meeting is governed by the Kubernetes code of conduct, which boils down to: please be nice to one another. So please do be nice to one another.
B
Well, there are two that are both flagged as IPVS. I think it would be useful if somebody from IPVS could go look at those; they're both kube-proxy and IPVS, I think. And then there's one about IPAM allocation, which I just tagged service and Antonio on. The others are older.
A
Okay, so yeah, it sounds like we should just go with what we've got in the agenda so far, and then we'll follow up with triage. But if anybody could jump on those, in particular the IPVS ones and so forth, we'd appreciate it. Can you see my doc?
B
Should be good now, yep. What's this window called? Significance?
B
How's that, do you see it? Yes? All right, 1.27 is out. Congratulations, everybody! Let's run through KEPs and see what we need to touch, in terms of moving between columns and in terms of next milestoning. So, did we remove any gates this time? We removed dual-stack, right?
B
Oh, this is already in the column, so I'm in the wrong column. Did we remove, you know, traffic policies? 1.28, 1.27. gRPC probe, did that go GA? Did we remove the gate?
B
Oh, we did, we did some of it. We didn't touch the ones that need to be updated now, in 1.29 or 1.28. That's right, thank you, Dan. Let me close these, because they're not interesting right now. Okay, so gRPC went GA in 1.27, so we're going to set the next touch for 1.29, right?
B
So let's watch this. Oh, I'll add a new milestone.
B
So now I mark it as 1.29, and we know that the next time we touch it will be 1.29. This one is 1.28. This one is 1.28. Cool. 1.28, 1.28, 1.27. This one we want to GA in 1.28, hopefully, right?
B
Okay: dual-stack host IPs, Dan?
B
Yeah, alpha. Okay, I'm just cross-checking everything to make sure there are no more 1.27s left.
B
Okay, I'll look at that, but I'm going to move it over to... can I move it all the way over? No. How do I move it?
C
What happened with this host IPs one? Because we went back and forth a lot of times with it. Okay.
B
Oh, oh, I think it is.
B
Yeah, right. Okay, so I think that's it for now. I'll run through the other ones to see if there are any that have made any progress or need to be moved out. This is the time when I remind people that if you have KEPs that aren't in this dashboard, we're not paying attention to them, or at least I'm not. So if you know of KEPs that aren't here, let me know and we can add them to this dashboard, or you can assign them to the project yourselves. With that, I'll stop sharing and let you continue with the agenda.
A
All right, moving on to the next item, then: Marcus, network quality of service demo and feedback. Go ahead.
F
Yeah, hello, everyone. I'm Marcus, and I'm working at Intel.
F
I'm about to present a KEP that I've been working on, and then one possible user of that KEP: network quality of service. I have a few slides to give some background and an overview of the KEP, and then I have a demo of a proof-of-concept network QoS implementation that we have based on the mechanism.
F
Yeah, cool. So yes, I'm going to present an overview of the QoS-class resources KEP, and then one usage of it that we've been prototyping: network QoS. The QoS-class resources KEP is about improving the quality of service of applications in Kubernetes.
F
The main goal is to improve the quality of service of applications by enabling mechanisms and technologies for QoS that have traditionally been out of reach in Kubernetes. Two primary usages that we could enable immediately with this KEP are cache allocation and memory bandwidth control.
F
At Intel we have a hardware technology called Resource Director Technology, or RDT, and there are similar things in AMD and Arm as well; it's class-based in the hardware. We have also implemented, in the runtimes, class-based management and configuration of the blkio cgroup controller, for managing throttling and prioritization of local disk I/O. So these would be the immediate first customers for the new QoS mechanism.
F
But then there are other usages that have been under consideration. One is network QoS, and there are memory QoS, slab management, and other uses as well. There's a link for the PR, or the KEP, and we are now really trying to get this into 1.28 as alpha.
F
It has been on the table for quite a long time in SIG Node already. Originally it was targeted for 1.25, but then dropped because of reviewer bandwidth limitations; the same happened for 1.26, and we missed 1.27 as well. So I'm really trying to get awareness for this KEP and to get people to plus-one and review it if they're interested.
F
We call it QoS-class resources in Kubernetes: a fundamentally new resource type, where you don't allocate any numerical amount of something, but just assign a class identifier to a container or a pod. The resource could be, in this case, network, and then you could just assign this network class, let's say fast, slow, or something else, to containers. So it's not exclusive allocation; it's shared by design.
F
The configuration and management of these QoS resources and classes would be handled in the container runtime, or below it, and Kubernetes only knows the resource types, let's say network or cache, and the classes which are available on which nodes, but not much more than that. Kubernetes doesn't need to know the meaning of the resources, of the classes, or their parameterization.
F
So what, let's say, a network class "fast" would mean, the kubelet doesn't need to know anything about. I've got a short, simplified architecture diagram of what we are proposing here, just to give an understanding of how it would play out. This simplified picture has a Kubernetes cluster with the API server and scheduler in the control plane, and then one node. In the node we've depicted the kubelet, the container runtime, and the system, which is everything else: OS, hardware, and everything. The story of the QoS resources starts from point, or bullet, one in this picture.
F
The container runtime would initialize, or discover, the QoS resources and classes available on the system; then, in step two, it would inform the kubelet about the available QoS resources and the classes; and then, in step three, the kubelet would in turn update the node status in the API server.
F
In this case it would update, or show, three types of QoS resources on the node, a, b, and c, and then some classes under each QoS resource type. Then, in step four, the pod is created in the API server.
F
It has some requests, in this case two requests for two different QoS resources: say, for a it would request class gold, and for c class high-priority. The scheduler picks that pod up and does the normal node filtering, to see which node is able to satisfy the QoS resource requirements, and it says: okay, node X can provide this gold class for a and the high-priority class for c.
F
Then it schedules the pod to node X; the kubelet picks the pod up, and in step seven sends the information to the container runtime: okay, create this pod and these containers, with these QoS resources and classes. And then the runtime, in the last step, number eight, will actually enforce it.
F
Then, about the network QoS proof of concept, just one slide. We thought: okay, how could we, or how would it be possible to, leverage this mechanism for networking? And we came up with one solution, a proof-of-concept type of solution. We've had some private discussions with some people, and they've been encouraging: okay, it looks like it could work out.
F
So: present it to SIG Network, and that's what I'm now doing. In this proof of concept we extend the CNI config, in a sense, to have one new top-level field, and we utilize the capabilities mechanism to express the QoS resources, or rather the classes. The runtime gets the available QoS classes from the CNI config; in this proof of concept the config has a qos field at the top level, and it can have an arbitrary number of classes.
F
In this example on the right we have fast, slow, and then something else, and basically you could specify any arbitrary capabilities there. Then the runtime (in our proof of concept we have containerd enabled) gets the network QoS class that is requested; it gets it from the kubelet at sandbox creation time, then sees what capabilities are set for the class and passes those capabilities on to the CNI plugins.
F
So here I have a single one-node cluster running Kubernetes, but I haven't rebased it in a while, so it's still a 1.27 alpha version; but anyway. And I have a patched containerd there to support these QoS resources.
F
On the one-node cluster I can show how the node looks. We have some QoS resources here; for example, for the network we now have three classes, fast, normal, and slow, and then we have the node capacities there as well.
F
So that's how the node looks. Then a first simple example of a pod requesting some QoS resources. Here we have a simple pod doing really nothing, but requesting the fast network class, and it has one container that requests the cache and memory-bandwidth class gold and the blockio class high-priority.
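As a rough sketch, the demo pod just described might look something like the manifest below. This is only an illustration: the qosResources field names and placement are assumptions reconstructed from the demo, and the actual API shape proposed in the KEP may differ.

```yaml
# Hypothetical manifest for the demo pod described above. The
# "qosResources" fields are assumed names for illustration only;
# the field names actually proposed in the KEP may differ.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  qosResources:            # pod-level QoS-class resource (assumed)
    network: fast
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      qosResources:        # container-level classes (assumed)
        cache: gold
        memory-bandwidth: gold
        blockio: high-priority
```

Note that, in line with the KEP's "class, not quantity" idea, every value here is a class name rather than a numeric amount.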
F
That pod is running, and then we can take a look at the node as well, and we can see that, okay, the capacity has changed: the fast network class is now kind of saturated.
F
So everything is specified in the CNI config.
F
Three classes: slow, with bandwidth parameters; then normal, with a capacity of eight for this node; and then fast, with a capacity of one and some bandwidth parameters. And as we saw, we have the same classes and capacities on the node as well.
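Put together, the proof-of-concept CNI config described above might look roughly like this. This is a sketch of an experimental extension, not a published schema; the top-level qos field, the class names, and the bandwidth numbers are all assumptions reconstructed from the demo.

```json
{
  "cniVersion": "1.0.0",
  "name": "poc-network",
  "qos": {
    "network": [
      { "name": "slow",
        "capabilities": { "bandwidth": { "ingressRate": 1000000, "egressRate": 1000000 } } },
      { "name": "normal", "capacity": 8 },
      { "name": "fast", "capacity": 1,
        "capabilities": { "bandwidth": { "ingressRate": 100000000, "egressRate": 100000000 } } }
    ]
  },
  "plugins": [
    { "type": "bridge", "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
    { "type": "bandwidth", "capabilities": { "bandwidth": true } }
  ]
}
```

At sandbox creation the patched runtime would look up the requested class and pass its capabilities on to the plugins, here the standard bandwidth plugin.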
F
And then we can also see that it actually does something. So, okay, this is the simple pod requesting the fast network class; let's run that one, and then...
F
Yeah, well, here, okay. So now we can see that for the eth interface it actually put in place the bandwidth parameters that we set, and then if we create another one that requests the class normal, then...
A
Just wanted to point out the time, but I have a couple of questions. Personally I find this interesting; I'm coming into it late, it wasn't really on my radar before, so I just CC'd myself on the PR. And this is probably an obnoxious question, but I just didn't catch it while looking things over: do you have other orgs that are interested in this and working with you on it, or has it been mostly just Intel so far? Just a question, just curious, because I didn't see any.
F
Well, yeah, a lot of people have, let's say, said that they are interested, kind of interested, and that, yeah, it looks like a cool mechanism. But there's no other industry partner I could name at the moment, no other big player driving for this. We intend this to be a generic mechanism, to enable a lot of use cases, and there are listed in the KEP some more exotic possible future endeavors as well.
A
Gotcha. Lars just linked Cilium's bandwidth manager, which I assume is to say: hey, this might be relevant to you as well.
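For contrast with the class-based approach above, Cilium's bandwidth manager consumes a quantitative per-pod annotation. A minimal example (pod name and image are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited
  annotations:
    # Quantitative egress limit enforced by Cilium's bandwidth manager;
    # contrast with the qualitative fast/normal/slow classes above.
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

This is exactly the quantitative style of API that the next comment distinguishes from the KEP's qualitative classes.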
F
And yeah, one comment, of course: the classes needn't be of the fast, normal, slow type; they could be of a red, blue type. If you have different kinds of, well, you're better network people than me, but say you have a fast backbone network for storage and then a not-that-fast one for services, you could select: okay, now I want the red type of network, and then try to schedule on that.
B
This is the biggest problem I have. And Marcus, I'm not ignoring you on Slack, I'm just super busy; I'm ignoring everybody, I'm not ignoring you in particular. The problem that I see here is that we now have three KEPs in flight that are all roughly describing overlapping areas, right? This is explicitly a qualitative API, not a quantitative API: you can't request a gigabit of network traffic.
B
You request gold or purple or triangle, right? Yeah. And there's the multi-network KEP, which is about indicating the need for explicit network connections, right, as in: I need connectivity to this exact network. And there's the dynamic resource allocation API, which is about connecting to specific arbitrary resources. We just had a conversation this morning about trying to bring those latter two in line. I had not been thinking about this one as part of the multi-network space, but now that you bring it up I can see the potential for overlap, and so we need to figure that out.
B
My biggest fear here... I love this idea, by the way, and as a qualitative metric I've definitely heard customers over and over and over again asking for priority access to networking, so I like the idea. I'm just super concerned that we're throwing three huge things at the scheduling subsystem all at the same time.
H
Well, Tim, you mentioned that it's theoretically possible to bring DRA and this multi-network KEP together, so hopefully it will be like one thing. As for the scheduler, it's already implemented for this one, and according to what Marcus tried it's not that big an impact on the scheduler, because these are countable resources in the same node status part, so it should not be problematic.
B
Yeah, I need to go back in and catch up with everything you guys have done, and give you some love soon.
H
QoS for this kind of network setup is more of a formal parameter passing to existing setups, things like Multus; it might be more helpful than annotations, and the same goes for multi-network interfaces with explicitly requesting the interface, yeah. We also had it in our plans to talk with the SIG Network multi-network folks; unfortunately we missed two meetings, but Antonio has directed us, and we will talk to him anyway.
H
...node-level resources, so the network abstraction can be done with the same approach as what we did for devices. All right.
A
Yep, sounds like we are running low on time. It sounds like the next follow-up here is to connect the dots between these three KEPs. Marcus, thank you for bringing this up; this was a great presentation. I also think it's really cool, but a little bit more coordination, I think, is what we need to do next to keep you moving forward. All right, so that is enhancements pull request number 3004; if you're interested, that's the PR, please join the conversation there.
A
All right, moving on. Dave, or, let me share my screen again. Next agenda item: Dave, EndpointSlice type FQDN. Go ahead and start.
G
Yeah, I guess the backstory here is: Knative uses Service type ExternalName kind of like a CNAME, as a way to program CNAME records within the DNS system without actually needing to know what the DNS system is. Recently, with all the ExternalName CVEs, which can cause confused deputies, or turn proxies into confused deputies, I sort of suggested: don't use ExternalName references, especially having proxies programmed to point to these types of services.
G
So I kind of discovered: hey, you could use EndpointSlices, and the specific type is the fully qualified domain name one, as an alternative way to get CNAME records into the Kubernetes DNS system. And lo and behold, I saw it was being deprecated because it had undefined semantics. So I chimed in on the issue suggesting: hey, hold on, let's not deprecate it; I have a potential use, whether this is the way to do those sorts of record aliases or there's an alternative.
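The pattern being described is a selector-less Service backed by a manually managed EndpointSlice with addressType: FQDN, roughly as sketched below (names and the FQDN are placeholders). Whether a DNS implementation actually surfaces this as a CNAME is exactly the undefined-semantics question raised on the issue.

```yaml
# Sketch of the FQDN EndpointSlice pattern; all names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: aliased
  namespace: default
spec:                      # no selector: endpoints are managed by hand
  ports:
  - port: 80
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: aliased-fqdn
  namespace: default
  labels:
    kubernetes.io/service-name: aliased   # ties the slice to the Service
addressType: FQDN
endpoints:
- addresses:
  - "backend.example.com"
ports:
- port: 80
  protocol: TCP
```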
G
I'm just trying to come here to figure out what a good path forward is, because one of the reasons this came top of mind for me is the Gateway API: one of the issues there is having conformance on the different types of Kubernetes services that can potentially be targeted as the backend.
G
It'd be good to know: hey, this isn't a valid type anymore, don't include it in the upstream conformance tests. And alternatively, for me in the Knative sense: how do I, as a person writing for or targeting Kubernetes as the platform for these higher-level CRDs that we have, how do I program the DNS so that I can program gateways to point to these services, which I would consider managed solely by Knative and not by end users? So that is, I hope, a summary of the discussion.
G
That's kind of what happened in this issue. Thanks, Tim, for asking questions; it'd be easier if I could just be with you to whiteboard it, yeah.
B
I remember now. So, for the larger audience, the history here is: if you allow a shared proxy to access services in the cluster, like an ingress gateway or something, and you allow an individual, say namespace A, to request the proxy to serve namespace B, then you have enabled a way to get access to things that you might not have access to, but the proxy does. In particular, you know, the user in namespace B might have configured firewall rules, and they may have their own network policies that prevent you, in A, from accessing it.
B
But by going through the proxy you've now enabled access to something that you shouldn't have enabled access to, and that was the big CVE. That's why we shut down, or tried to shut down, all the ExternalName support: people, you shouldn't be doing that; ExternalName is just a way to get around that.
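The confused-deputy shape being described is roughly this: an ExternalName Service in one namespace aliasing a Service in another, which a shared proxy will happily resolve and route to. All names here are invented for the example.

```yaml
# Illustration of the aliasing pattern behind the CVE discussion;
# namespace and service names are invented.
apiVersion: v1
kind: Service
metadata:
  name: sneaky-alias
  namespace: team-a
spec:
  type: ExternalName
  # Resolves to a Service in team-b; a shared proxy configured to route
  # to "sneaky-alias" may grant team-a access it should not have.
  externalName: private-svc.team-b.svc.cluster.local
```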
B
It's why Ingress doesn't have a namespace field on the service, and it's why ReferenceGrant exists in the Gateway API, right? So the question then is, (a): is there a legitimate use case for wanting A to be able to export an endpoint that is in B? It sounds like Knative has a use case for that. And then, (b): how do we do that in a safe way that doesn't enable the escapes that you don't want? And there's a follow-up: how do I enable routing to arbitrary DNS names that aren't necessarily cluster services, without also enabling routing to cluster services? That's sort of where the thread ended, right?
G
Yeah, and I can highlight Knative's use case for those on the call. We do a bunch of fancy stuff, like L7 traffic manipulation, splitting and rollouts and things like that, and we also want it done within the cluster. Potentially one Knative application in namespace A wants to call a Knative application in namespace B, but there might be traffic splitting; so what do we do there? We actually proxy you through an L7 proxy that might live in a Contour namespace or an Istio namespace. So that's why we use those sorts of aliases: to support the extra hop, so we can do these sorts of fancy traffic manipulations. But I would say everything you mentioned is accurate, yeah.
G
I think the other thing to note is that Knative does a lot of its programming mechanically, or programmatically, so if it involves a third party coming in, as long as we can do it programmatically, I think that's fine. Because someone was suggesting: oh, you can just program CoreDNS directly. And I'm like: no, unless there's a proper Kubernetes API that is portable, I don't want to do things like that.
I
Everyone can do that... as absurd as that sounds, I feel like that may be an easier path. You know, it seems like we keep on getting stuck on this. This is something I can see the purpose of, but it's also very hard to do safely. But, you know, certainly something that we've been thinking about, or I've been thinking about, is: if we're ever going to have ClusterIP gateways, and we want them to have a similar experience to Service, then we need ClusterIP allocation, which Antonio's KEP gets us close to, and then probably some way to allocate DNS names. So maybe that's actually a more promising path; I don't know.
G
I don't care if this takes two years or whatever, because I know that in the meantime I can shim one service's ClusterIP into the other's endpoint, right? I have a workaround, which isn't ideal, but we already have to do that for Contour, because they completely turned off ExternalName support, while Istio is chugging along, which is great. So I'd rather, yeah, to your point...
G
Rob, it would be nice to do it right, and if I can, for example, create DNS records with a higher-level API, that would be great.
B
And I think the problem here is, I mean, I agree with Rob that that seems like a likely outcome, but there's a chain of questions that we need to answer before we can get there. The service IP stuff is still sort of under discussion; I don't know who all we have on the call today, but there's a big open issue of how it overlaps with the pod CIDR allocations, and literally I use...
B
The word "overlap" on purpose, because that's sort of the core of the problem. I don't know that we can really decide on what we're going to do with a DNS resource until we have a clearer picture of what we're doing with service IP allocation. And I desperately want service IP gateways, right? Rob knows this; I never shut up about it. But yeah, in the thread we discussed that maybe there's an alias kind of service, and that's a way to use ReferenceGrant. I don't really know how that would work; I was just spitballing. What I'd like to encourage is: let's think about some creative answers, instead of just trying to shoehorn in the things that we know don't work, or that paint us into the corner cases, right? If we can find a different way of representing it, if we can get to the core problem statement: how do I denote that something is safe and something else is not?
G
Another extreme, I guess, and I haven't thought it through, this is sort of hacky, but: could there be a designated label the Gateway API recognizes, like...
G
...if this label exists on these ExternalName services, then we know we can trust them? I don't know. And then, for example, an operator could create a webhook that blocks services with those labels in generic namespaces but allows them in some places; or, I assume, the webhook could also look at the role of the user, potentially.
I
I don't remember who else mentioned it, but, you know, ReferenceGrant, as it gets into SIG Auth and moves more and more toward authorization, is also a possible solution here. I don't know. But any kind of label or something like that feels like it is really just a hack or a workaround, yeah.
G
That's fair, I guess. Can ReferenceGrant express the traffic being able to go from one namespace to another, or are you trying to say ReferenceGrant would allow users to create resources of a certain type?
I
At this point, the idea in SIG Auth is that ReferenceGrant would allow cross-namespace references, as today, but also authorize them, so the controller can read those resources across namespaces without having, you know, full access.
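For readers who haven't seen it, ReferenceGrant is a Gateway API resource that lives in the target namespace and opts in to specific cross-namespace references. A minimal sketch (namespace names invented):

```yaml
# A ReferenceGrant in the *target* namespace (team-b) allowing
# HTTPRoutes in team-a to reference Services here. Namespace names
# are invented for the example.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-from-team-a
  namespace: team-b
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: team-a
  to:
  - group: ""          # core API group
    kind: Service
```

The SIG Auth direction discussed here would additionally have grants like this drive actual authorization, rather than each implementation honoring them purely by convention.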
A
No, it's okay, I'm just letting you know that I had intended to bring this up too. So there's a little bit of this in a doc that you'll be able to see, David. Awesome, it's on the meeting agenda.
B
Thanks for bringing it back. I popped it to the top of my queue again, and I will try to put some more thinking into it. But just to repeat myself: hey, people, put on your creative-thinking hats; let's try to see if there's another angle to come at this from.
G
And if you feel the back and forth on the issue is dragging, if you just want me to whiteboard it on a Miro, just ping me on Slack. I think I'm just deep in roadmap tasks, so to save us time, I'm available.
A
Roger that. Okay, thank you, Dave. So I just had this relevant little tidbit.
A
But to summarize, there seems to be interest in... there's like a dichotomy now where, on one hand, ReferenceGrant in the Gateway API is about basically allowing network access to endpoints, but they are interested in having ReferenceGrant, which we're kind of prescriptive about right now, more or less be a tool where there would be a controller that would actually provide the RBAC permissions relevant to that and give them to an entity.
A
So that's kind of the direction they're talking about right now, and I wanted to bring this up because ReferenceGrant was, you know, born here in SIG Network, and it's starting to hit multiple SIGs. If you're at all interested, I would track this and get in touch with Jordan; basically it's SIG Auth, SIG Network, and SIG Storage who are implementing ReferenceGrant right now.
I
Yeah, I mean, this is a huge set of projects here, so, you know, there are lots of opportunities to help out. If this is something you want to see succeed, we're going to need some help; there's a series of things that we want to get in here, including a custom authorizer. So again, this will happen eventually, but it would happen sooner if we had more people to help, so yeah.
B
Careful what you ask for. Yeah, I mean, I'm watching it, trying not to get super involved, because I don't think I have anything unique to offer. But what I'm seeing really is, you know, Kubernetes as a project maturing in regard to permissions and security. Still, this feels like you opened a can of worms...
B
...that actually has a lot more worms in it than you thought, and we are now entering a space where, yeah, we really do need this, but what we need is an order of magnitude more sophisticated than what we needed for Gateway. Which is great; it's, you know, unfortunate that it will slow things down, but I think it's great that the larger auth folks are now looking at it. Yeah.
A
Interesting metaphor. I would not have gone for a can of worms; I would have gone with a box full of cats, but you had no idea the cats were in there. This is going to get very, very interesting, and just the sheer... the future of potentially having APIs where it's like, I do this, and it can get an entity and provide it with RBAC, is enormous. So yeah, definitely jump into the conversation.
I
Yeah, I think so, yeah. You know, there's going to be a lot of work in the next... you know, we're getting into the next KEP and enhancement cycle, and that's going to be where a lot of this discussion has to happen. I think we've agreed on a general plan, and so I'm hoping we can get the KEP finalized this cycle, but there's a lot of work that that KEP will require, yeah.
B
No, I think it's good that we stopped pretty much on time. Anybody, last words?
C
I'll take the two IPVS issues.