From YouTube: Kubernetes SIG Network meeting for 20230119
A: Cool. Well, this is the January 19, 2023 Kubernetes SIG Network meeting. We adhere to the CNCF code of conduct, which boils down to "don't be mean." We record these meetings and post them up on YouTube for your records. So we do have a few items on the agenda — quite a few agenda items, so we might have to time-box everybody to about 10 to 15 minutes to get through everything. And why don't we kick it off? It looks like we have triage on the books. Tim, are you going to take that?

B: I got the sound right this time, yes. Good. Well, the good news is triage is short today, so we'll give you some time back. I only have two issues that I thought were worth talking about, neither one of which is particularly egregious. And, as usual, thank you to everybody who did pre-triage. First issue:

B: Somebody is reporting that the ingress class field and the ingress class name annotation can't both be set at the same time — except they can through update; they can't through create. So clearly, at some point, somebody thought it was important that we not allow both. But they have a use case where they have different controllers that look at different parts of it, and so this customer really wants both.

B: My feeling is, I don't see a reason why we should actively disallow it, but maybe we should send back a warning for them, and I wanted to see if anybody vehemently disagrees.
E: I don't feel that strongly about it at this point. I know back in the day we were really trying to push people to move to the new thing and, at the very least, not use the new and old thing together, because using the new and old thing together gets complicated — if you're a controller author, which thing should you use? Using both at the same time was meant as a transition step, not a "you should create things that are brand new with both new and old values."

C: All right. The only reference I saw was that Endpoints also had a warning, and that was about it, and it wasn't clear to me where, I mean...

B: We've been adding a lot of warnings lately, and they really are pretty straightforward, right? It's really an if condition and returning a string — there's not a whole lot of structure there. Okay.
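The shape of the warning Tim describes — really just an if condition and a returned string — could look roughly like this sketch. (Python for illustration only; the real validation lives in the Go API server. The annotation name is the real deprecated one, but the function and parameter names here are assumptions.)

```python
# Hypothetical sketch: warn, rather than reject, when both the
# ingressClassName field and the legacy ingress class annotation are set.
LEGACY_ANNOTATION = "kubernetes.io/ingress.class"

def ingress_warnings(spec_ingress_class_name, annotations):
    """Return a list of warning strings for an Ingress-like object."""
    warnings = []
    if spec_ingress_class_name and LEGACY_ANNOTATION in annotations:
        warnings.append(
            "both spec.ingressClassName and the deprecated "
            f"{LEGACY_ANNOTATION} annotation are set; different "
            "controllers may disagree about which one to honor"
        )
    return warnings
```

The point of a warning (as opposed to validation failure) is that the request still succeeds; the client just sees the message.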
B: Cool, thank you. And then the second one to look at is this poor user who has run out of cluster CIDR. They did follow up: they have a /16 cluster CIDR and a node mask of /24, they're in the 200-node range, they're out of pod IP space, and they're asking what to do. And the only answer I was able to give them...
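The arithmetic behind that ceiling: a /16 cluster CIDR carved into /24 per-node blocks yields only 2^(24-16) = 256 node allocations, so a cluster in the 200-node range is already close to exhausting it. A quick sketch of the math:

```python
# Pod IP capacity when a cluster CIDR is carved into per-node blocks.
def max_nodes(cluster_prefix_len, node_prefix_len):
    """Number of per-node pod CIDR blocks available in the cluster CIDR."""
    return 2 ** (node_prefix_len - cluster_prefix_len)

def addresses_per_node(node_prefix_len):
    """Raw address count in one node's block (not all may be usable)."""
    return 2 ** (32 - node_prefix_len)

# The user's case: /16 cluster CIDR, /24 node mask.
assert max_nodes(16, 24) == 256        # ~200 nodes is close to the cap
assert addresses_per_node(24) == 256   # per-node pod address budget
```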
B: ...with Calico, and I increased it. Nothing came up with the new stuff; I restarted stuff enough times that it eventually started working. It was not nice.

B: Oh, it's 1.19. Yeah, that's several releases back, several years old. And usually we say, "oh, too old, we can't help you," but I feel really bad that there are people in production who are running into these issues, and I'm seeking to try to help them if we can — without, you know, backporting whole API changes by six revisions.

C: I mean, that said, there's not a lot we can do with this other than say "try it, maybe," and if there's a bug, file a bug report. Yeah.

K: But with the more cluster CIDRs... yeah, yeah. Well...
B: The other question, right — that's why I'm qualifying all my recommendations to them. And yet I still feel like I want to try to help them. So I didn't get the answer that I was hoping for, but I will follow up here with, you know, the fact that we talked about it and that you may need to restart the network plugins. I'm going to recommend that they try bringing it up in a separate cluster and moving things over, which is also a very complicated answer, but it could solve the upgrade problem at the same time. Maybe this gives them the incentive they need to upgrade.

B: I would like to ask everybody to participate in our backlog grooming. So for the next session, it would be great if everybody brings one issue. Go look at our old issues and just bring one issue here that we can talk about — whether it's "should we close this?", or "I think this is implemented, but I'm not sure," or "oh, this seems like a real problem — how come we never got to this?"
I: Everyone, yeah — naming is a hard thing. So in the multi-network community we started talking about the name for the object that we would like to introduce with this effort. The TL;DR — I don't want to take much of the time, so TL;DR: we wish to add a new object that's going to represent the pod networks which a pod can reference. Of course, later on this object could be referenced by some other objects as well.

I: We are hoping this can become like a core object in Kubernetes. There is a bunch of proposals, and I just want to get a feel of the community. One of the puzzling things that stood out was "k-net," and what the thing about that name is. And just to maybe add: a kind of obvious name like "network" was proposed, but we are not leaning towards that one because it's a very overloaded term.

I: In any common discussion with anyone about networking, you would always have to add what network you have in mind, right? So "network" is not the best candidate for us. That's one reason — that's why we're trying to move away from that name. For any other proposals: in the minutes there is a link to our minutes, where we have some proposals over there.

I: We have, I think, a few minutes left — I think I wrapped up quite nicely. Any thoughts on the k-net name?
G: Yeah, I would really strongly suggest we not do things that can confuse us with the traditional Kubernetes networking. So, "k-net" — I've known a lot of places where people will talk about "the Kubernetes networking" as meaning what we actually have currently, and picking a name that's going to confuse that, I think, is going to be just a mess.

G: But that's not what most people think about as Kubernetes networking. What they think about is the current definition that we have, which is not a low-level definition, right? What people think about is: I have layer 3 connectivity between all of my pods; I have, you know, this network policy thing that allows me to do isolation.

G: I have Kubernetes Services that allow me to do some kind of load balancing. That's what people think about in terms of Kubernetes networking. I really don't think that we should be hijacking those conceptual norms when naming other things, right? Because names have power: they shape how you think about things, and they also can sow confusion. And so I would really strongly suggest not doing something that's going to sow confusion.

G: You sit down and you talk about Kubernetes networking in terms of these things. One of the things that Kubernetes has gotten extraordinarily right is that we've moved away from the Cloud 1.0 dysfunction. In Cloud 1.0, everybody took all the physical objects, slapped a "v" in front of them, and called it good, right? And so we carried all the physical concepts over. One of the beautiful things about Kubernetes networking is that it doesn't do that, right?

G: One of the things that I will sometimes say with great satisfaction is that, with a very small number of exceptions, you don't even find IP addresses in the Kubernetes networking API, because users don't care. And so I would suggest, if you're trying to rethink Kubernetes networking in a way that makes users care about low-level concepts — we certainly don't want that confused with the beautiful thing that we have currently.
K: I think we agree on a few things: we don't want to call it "Network," and I don't want to call it "NetworkAttachmentDefinition," and sort of... So what is it we're trying to describe? So we said, okay, it's what a pod attaches to — is that a network? I mean, we had this discussion before, right? When I asked around, I got two answers — L2 and L3: "you use IP and Ethernet," right? I think that's what we're using, right?

K: I mean, an L2 Ethernet is basically a network stretched over all the nodes that you attach to. But this is sort of the problem at first: what is it we're modeling? And if it is a network, okay, then we need to give it a name — but is it different types of networks? I mean, we said Ethernet, IP, or L2/L3. What is it, really, that you're trying to model? And I...

K: I think that's what Adrian was saying: what are we trying to model, and how do we give it the appropriate name, so we don't go back and call it the same type of things that we did in OpenStack?

K: So one way would be to start saying — part of the multi-networking proposal is also to be able to have parallel cluster networks, or pod networks, right, where you can have different namespaces using different cluster networks. How would we describe that? If we need to conceptualize these things, maybe we should start conceptualizing what's there and then expand on that, instead of trying to plug in the new and then convert. When it comes to coming up with these names — I mean, one of the reasons we call it...

K: ..."the object" was that we could not agree on what to call it, right? We didn't find something that was good. So, "we're too early," I said; better to try to describe this ability. Because in the cluster network, right, what you have is connectivity, and you can realize that connectivity in many ways: it can be realized by one long L2, or it can be an L2 per node, or no networks at all that you attach the pod to, right. It could be one network for every pod that attaches to a vrouter.
B: So, in Kubernetes we have two basic categories of resources. We have resources that are authoritative: when you create them, they cause things to happen. And we have resources that are representative: they model something that already exists, right? Nodes are the latter. If you delete a Node, it doesn't make your VM go away, right? That's an intentional decision — sometimes I regret it, but it is intentional.

B: Whereas if you delete a Pod, it does make your workload go away, right? Which pattern does this thing follow? I think it's the latter. Creating a "network," or whatever we're going to call it, doesn't cause a thing to come into being, right? So... yes.

B: I raised this issue only to put the seed in for the discussion. As you're thinking about names — naming is hard, and I'm terrible at it — but as you're thinking about it: this is a representation, a model, an external, an "other," right? Let those words sort of permeate your thoughts. For the sake of time, I don't think we can have this full discussion here.
B: Maciej, what you should probably do is open an issue, or make a Google Form or something, and send it to the discussion. Include a blurb about what you said before — like, "this is what we're looking for" — and make it a brainstorm: there are no dumb answers here. Just solicit suggestions for names, and then we can discuss the collected set later.

I: So yeah, I was thinking about some sort of form, but, you know, a form can be manipulated as well — politics. Basically, I am trying to capture the candidates; that's what I'm trying to get. So, with the community in our meeting, we'll try to gather and identify the candidates, and then, yes, I will send out the form.

K: Best to give it a temporary name, because I still think that we need to describe what these objects are — of course, once we know what the name is — and sort of describe what it is we're modeling. We saw this when we did the — I mean, what they call it from the plumbing group, right — there we have L2 networks that we stretch out that you can attach to, right. I think Ed was very convincing when he brought up the arguments for why we should have more than that.
C: You know, I reread the initial bit there in the document, and it still seems a little bit too long if we want to have a concise discussion around what the name of that thing we are trying to describe should be. Is there a two-sentence definition of what this object is supposed to be?

B: ...to put it a different way: the best part about brainstorming is the lack of judgment. You can just throw names in, and when somebody mentions one name, you'll say, "oh, that's interesting — let me riff on that idea," and throw some more names out. We've done this rather successfully in, like, SIG Multicluster — coming up with names, or at least weeding out names.
E: Yeah, no, I think all of these will be pretty quick. The first one: I wanted to reference a doc that describes some changes we're considering in Gateway API that would remove the concept of beta. So we would basically have two states: off-by-default-and-experimental, and on-by-default-and-stable — and not an intermediate state of on-by-default-but-not-quite-stable. I'm interested in feedback here; I'll send this out to the mailing list.

E: We'll also get some feedback from sig-arch on this. So far it seems to be mostly positive feedback from what I've heard, but I would love to get a broader perspective here. This is a CRD-based API, and it's especially expensive to add new API versions: it takes several releases to get rid of old API versions once you've added a new one and to do that full transition, and I'm not sure how much value it really adds.

E: So anyway, that's a thing that's open for discussion. I'll give just a minute or two if anyone in the meeting has comments or feedback on that idea.

E: I'll take silence as "everyone loves the idea," and I'll run with that. Okay, all...
B: Right — I talked to you about this before; I don't object to it. My feeling here is: you guys over in Gateway land are already at the leading edge of so many things, I think you should go try it and come back and tell us how it worked. You know, where Gateway goes, the rest follow.
E: That's kind — hopefully we're going in a good direction, yeah. Okay, next up is the topology/traffic-policy/everything-in-between discussion, which I feel has just gotten stuck going in circles. So I linked out to a KEP — and Dan, thank you for having the PR — but I know it's been kind of overwhelmed with comments and discussion that are, in a way, tangential to what your original goal was. And so I don't know.

E: I just know we're, what, a couple of weeks away from KEP freeze, and I would love to figure out something for 1.27, whatever that means. It seems like our current trajectory on that KEP is: we need something to happen, and I don't know what that is. I have ideas on the topology side; I just don't know how to help move this idea forward, and there are lots of different ideas together. I don't know.
J: So, the original thing that that KEP was supposed to do was make it so that Local traffic policy didn't cause Services to break when endpoints moved around — which has been addressed by proxy terminating endpoints in the meantime. So the actual original feature, of prefer-local or node-local topology, I care less about now, which is why I haven't been moving very fast on this KEP.

J: It kind of seems to me like the right answer is the thing that I was suggesting, where we integrate with that new pod scheduling topology feature that none of us had noticed before, and try to make Services integrate with that. But yeah, I guess I don't have a really strong sense of how people are using topology, so I don't know for sure that I'm the best person to be pushing it forward now.
E: Yeah, that's helpful context. I hadn't even thought through that idea — that terminating endpoints, or proxy terminating endpoints, does solve a lot of those problems. And I'm intrigued by the idea of using topology — topology spread constraints, I think it is — for scheduling. But obviously the relationship between a Service and the pod configuration below the Service is not always clear. You know, if every pod backing a Service has completely different config, it gets... I don't know what you do.

E: It sounds awful to try and make that connection.

E: One of the things I've been thinking about, talking with a few people around this — I think it was Tim's idea — is to move topology a little bit closer to how scheduling works, in that there are pluggable heuristics, right? You make it a sufficiently generic thing that somebody can bring their own heuristic, and we can accept it, at least in an experimental state. So we're providing a system that enables others to plug and play...
E: ...the custom logic that they think is most useful for them. Because right now we're using allocatable CPU as our ratio — you know, "this is the approximate amount of traffic we expect this node to receive" — and then we're trying to balance based on that, which is maybe accurate in some cases, but certainly not all. So maybe what I'm hearing is that we should actually just fold this in.

E: Maybe we should just pull this all back into a topology-related discussion and say: this is something that we'll explore as a pluggable form of topology, and kind of as the proof of concept that we can actually plug and play into the topology hint system. I don't know. Tim?
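The allocatable-CPU ratio E describes can be sketched as: each zone's share of allocatable cores approximates its share of expected traffic, so endpoints are apportioned in that ratio. (This is an illustration of the idea only, not the actual EndpointSlice controller code; all names here are made up.)

```python
# Sketch of the topology-hints heuristic described above: apportion a
# service's endpoints across zones in proportion to allocatable CPU,
# on the theory that CPU share approximates where traffic originates.
def desired_endpoints_per_zone(zone_cpu, total_endpoints):
    """zone_cpu: {zone_name: allocatable cores}. Returns fractional shares."""
    total_cpu = sum(zone_cpu.values())
    return {
        zone: total_endpoints * cpu / total_cpu
        for zone, cpu in zone_cpu.items()
    }

# Zone "a" has half the cluster's cores, so it gets half the endpoints.
shares = desired_endpoints_per_zone({"a": 64, "b": 32, "c": 32}, 8)
```

A pluggable version would let an operator swap this function for one driven by their own metric (observed request rate, say) rather than CPU.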
B: Yeah, I'm really torn on this one. You know, I was the voice of "let's not do this" at the beginning, and then I sort of came around, because it seems like there are enough people who just say, "just give me the wheel, right — give me the controls," that I shifted to being not an enthusiastic endorser, but an endorser. And if we want to shift the conversation back to "hey, let's try fiddling with the heuristic more" — which I agree...

B: ...we should do — I think that would be a great way to do it. But that doesn't solve any problems that customers and users have today, right, and it won't solve any problems that they have for quite a while. And so the question I have is, you know: is it worth doing both? I don't generally like the answer of "let's do both," but this feels like the break-glass situation. Do we need to offer it? Is it worthwhile?
E
That
could
make
sense
in
because
one
of
one
of
the
things
I've
been
thinking
about
is
just
implementation,
detail,
right
policy
it
or
whatever.
We
want
to
call
it.
The
idea
of
if
there
are
any
endpoints
in
the
zone
route
to
them
in
the
same
Zone
is
something
that's
easy
to
implement.
Without
hints
you
can
you
can
look
at
the
data
plane
and
it
can
just
Implement
that
without
any
centralized
control
plane,
anything
anything
beyond
that.
E
Like
a
a
new
heuristic
and
endpoint
slice,
controller
for
hints,
I
is
probably
going
to
be
more
focused
on
the
The
Ratio
or
you
know
how
how
we
think
we
should.
You
know,
given
this
set
of
PODS
and
the
set
of
nodes,
how
should
we
distribute
them?
Which
I
guess
that
works
too
I
I,
don't
know
I,
it's
just
I.
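The "if there are any endpoints in the zone, route in-zone" policy that E calls easy to implement purely in the data plane might look like the following sketch (illustrative only, not kube-proxy code): each node filters the endpoint set by its own zone and falls back to the full set when the zone is empty.

```python
# Data-plane-only same-zone preference: no hints and no centralized
# control plane needed; every node applies this filter independently.
def candidate_endpoints(endpoints, my_zone):
    """endpoints: list of (address, zone) pairs for one service."""
    in_zone = [ep for ep in endpoints if ep[1] == my_zone]
    return in_zone if in_zone else endpoints  # fall back when zone is empty

eps = [("10.0.0.1", "a"), ("10.0.1.1", "b")]
assert candidate_endpoints(eps, "a") == [("10.0.0.1", "a")]
assert candidate_endpoints(eps, "c") == eps
```

The trade-off discussed later in the meeting applies here too: with few endpoints per zone, this can concentrate traffic on one pod and overload it.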
H: Just jumping in slightly — I've been following a lot of these and tried to contribute some bad ideas, and I thought, like: this is so critical.

H: I really like the idea of making this hint system better — making it so I can give my own metrics, similar to how autoscaling went from fixed metrics to letting you bring your own, and suddenly the whole thing became way more powerful and way more useful. But I'm currently still on whatever cluster version still supports topology keys, and I can't move until I have something that allows me to just say local/zone/region. And it sucks, because I see all the cool stuff that you guys come up with and come out with, and I can't move. And I know...

H: ...I'm not alone in that, from other people I've spoken to. So I would really love to see an expansion of the hint system — making it better, making it more useful, bringing in custom metrics, all of that; I think that's great. I would really, really kindly ask that we try and put in a stopgap, just so we can move and upgrade and stop this. Yeah.
B: I mean, I agree with you that that's how it looks, but I don't put a lot of stock in that as a signal, because of exactly what you said: people don't come on to say "good job," right? This is not Amazon reviews, where there's some incentive to come back and tell us that we did a nice job.

B: You know, I have anecdotal evidence that at larger scale the heuristics work great, right — and that's kind of what they're designed for: given enough machines with enough cores, the CPU heuristic sort of works out. At relatively smaller scale — and Lawrence, I don't know how big your clusters are; and for anybody watching the recording, I didn't plan this: when Lawrence spoke up, I didn't know who he is, or where he works, or what he's working on —

B: ...but thanks for chiming in. It's clear that at smaller scale, the CPU-biased heuristic isn't really going to work.
E: Yeah, at smaller scale the problem with any kind of automatic system is that you run into a very significant risk of overloading endpoints, right? And our system — the hints as they are today — doesn't provide any way to define what acceptable risk is. So we make a best guess, and that best guess is not working for a pretty large chunk of people, especially at the smaller scale.
H: I would say, just on the scale stuff — I think scale is interesting. We've got clusters that are 30 nodes; we've got clusters that are getting bigger. I've tested this out to 1920 nodes, which — I get it, some people are way bigger, but it's not tiny. What's interesting is when you're spreading those nodes across loads of zones, because it's not really about your cluster size in terms of pure node count; it's how many nodes you have in each of these availability zones.

H: Sorry, that's an overloaded term — I just mean zones. And so I think that's where this stuff starts to really hurt: we're kind of stretched thin when it comes to resources per zone. But I'll stop talking, because I've made my point, and I promise I'll come back.
H: I need something that today allows me to say: "I don't care about your heuristics — I know what I'm doing. I want you to go try the local node; if it's not there, try the zone; if it's not there, try the region." I'm using those three examples because those seem to be primitives, you know — these aren't custom things I'm defining; these are primitives in the API somewhere that are well known and understood.

H: That's what hints were meant to try and fix. If I can have enough control and insight into that feature, then maybe I would move to that. But it sounds like that's a long way away, and so I kind of want both — which, I guess, is greedy. But...
B
We'll
have
to
wrap
for
for
time
and
lead
to
push
forward,
but
you
know
my
My
Philosophy
on
this
is
largely
unchanged
right.
Users
should
not
have
to
care
about
this.
They
should.
We
should
do
the
right
thing
as
often
as
possible
by
default.
B
It's
clear
that
the
heuristics
that
we
put
in
place
don't
reach
the
level
of
efficacy.
That
we'd
like
to
see
it's
scary,
to
give
users
this
level
of
control,
because
we
know
that
it
will
overload
them
and
it
will
cause
new
issues
to
come
back
to
us
and
we're
going
to
say,
but
I
told
you
so
right.
E
Technically,
the
hint
system
allows
anyone
to
publish,
hints
and
them
to
be
consumed
by
the
underlying
data
plane.
So
it
is
like
Antonio's
saying
you
could
just
I'm
not
saying
anyone
should
write
their
own
endpoint
slice
controller,
but
if
like,
if
you
need
ultimate
customization,
that
is
available
to
you
that.
E
B
Point
is
is
true
if
we
say
we're
going
to
add
this
policy,
then
we're
imposing
that
on
all
Cube
proxy-ish
implementations,
at
least
until
kaping
takes
over
the
world,
and
you
know
so.
We
will
have
to
go
back
to
our
friends
at
psyllium
and
our
friends
at
Calico
and
our
friends
at
at
oven
and
say
hey
what
do
you
guys
think
about
this
implementation
or
this
this
API?
Rather,
could
you
build
some
questions?
Sorry
I
just
see
that
power
has
his
hand
up.
K: It's really, really tough to do this in the load balancer, when traditionally we looked at the CPU load or the number of connections, right? But to do it in a way where you can control this in detail, and that works for everyone — I think it will become so complicated that it will not work. So we need to find something that's simple, and sort of...

A: We also have Antonio, and he has a 10-to-15-minute slot, so we're already overlapping.
D: Can you see the screen? Okay, yep. This is the way that you get one issue and everything turns out to be the same issue. So this was my conntrack pick, and I saw conntrack pieces everywhere — the last one was because of the kubelet probes. Several people asked me to explain the problem, so I decided to put it in a presentation and share it here. For some of you this is going to be obvious, and maybe some things are wrong, so bear with me.

D: It's kind of high-level, so don't be picky, okay? So, if we move to Kubernetes, the node actually turns into a router. The node is no longer just an operating system; it's an operating system that has more operating systems inside, and it has to wrap them. So, in addition to the common resources that we have — RAM, CPU, storage, and network — we now have to deal with the network...
D
Resources
typically
is
ephemeral
parts
for
all
the
connections
that
we
create
in
the
node
file,
descriptors
for
all
the
for
the
same
connections
and
the
contract
table.
The
contract
table
is
going
to
limit
us.
The
number
of
connections
that
we
can
handle
in
this
system.
Okay-
and
there
are
several
options
in
the
camera
that
allows
us
to
tune
this
this
systems,
and
it
was
in
this
diagram.
These
are
the
places
where
the
packets,
the
packets,
hits
this
resources.
D
So
when
you
are
generating
a
connection,
you
you
go
from
the
local
process,
you
take
an
ephemeral
part
and
you,
when
you
Traverse
the
output,
you
generate
a
contract
entry
the
same
when
a
packet
is
coming
externally
or
from
a
park.
You
hit
the
contacting
in
this
routing
places
and
the
packet
move
forwards.
D: Well, it's a typical diagram of the network stack — nothing special, just the common stuff. I'm sure all of you know this better than me, okay. So this is how the conntrack entries are generated, and now we need to understand, once the connection is created, how the socket frees the resources.

D: So when you finish the connection, the operating system still keeps it during a certain period — defined by the RFC, or whatever — that is double the maximum segment lifetime, which in Linux is 60 seconds by default. This is the time that the operating system keeps the socket open, despite the connection having finished. And this parameter cannot be overridden — I think it is a constant in the kernel; you would have to recompile the kernel...
D: ...to change it, yeah, yeah. Thank you. And then what happens? The problem is that when the kubelet has to probe a lot of pods, it needs to generate all of these connections, okay? So this is an example: if you create 100 replicas — on the same node, okay; this is an edge case, it's not common — but in some clusters you can see these things.

D: Okay, and if you have a lot of probes, you have a lot of these connections, and if you have a small probe period, you have a lot of short connections that are going to take 60 seconds to recycle. And what they do is create this funny conntrack table for you: you can see that you have a lot of stale entries in TIME_WAIT. So, in addition to the ephemeral ports, you are also consuming conntrack entries.
D
The
maths
are
the
following,
so
we
have
the
the
the
the
default
configuration
in
Arena
system
is
that
you
have
about
28
000,
similar
ports,
the
contact
table
by
Q
proxy
I,
don't
know
how
this
value
is.
I,
don't
know
if
it's
dynamic
or
no
I
think
that
is
based
on
CPU,
but
in
my
system
we
have
200
000
entities
for
Content.
So
if,
if
we
put
a
case
in
the
that
you
have
100
pods
with
two
containers
and
one
prop
per
container,
it
means
that
you
are
going
to
have
300
props.
D
If
you
have
a
very
small
time
period
for
these
props,
this
is
going
to
generate
300
requests
per
second,
because
we
have
this
tank
weight.
We
we
are.
We
are
not
going
to
start
to
recycle
this.
We
sent
this
anti
until
this
60
seconds
timeout
ends.
So
it
means
that
during
this
60
seconds
we
are
going
to
exhaust
this
17
000
parts.
That
is
more
than
half
of
the
ephemeral
parts
and.
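The TIME_WAIT arithmetic from the talk, spelled out: each closed probe connection holds its ephemeral port for the full TIME_WAIT interval, so at a steady probe rate the number of ports stuck in TIME_WAIT is roughly rate × 60. (The ephemeral-port count below is the span of the usual Linux default `net.ipv4.ip_local_port_range` of 32768-60999, matching the "about 28,000" figure in the talk.)

```python
# Ephemeral-port pressure from high-frequency kubelet probes.
TIME_WAIT_SECONDS = 60        # 2 * MSL, a compile-time constant in Linux
EPHEMERAL_PORTS = 28_232      # default ip_local_port_range: 32768..60999

probes_per_second = 300       # the 100-pod example from the talk

# Ports simultaneously stuck in TIME_WAIT at steady state:
ports_in_time_wait = probes_per_second * TIME_WAIT_SECONDS
assert ports_in_time_wait == 18_000
assert ports_in_time_wait > EPHEMERAL_PORTS / 2   # over half the range gone
```

And each of those sockets also occupies a conntrack entry, so the same math eats into the 200,000-entry conntrack table at the same rate.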
D: Yeah — let me finish, okay; two more sentences, if anything. Okay. And the other thing is that, in addition, the node team wants to add a second probes KEP, okay, that can go down to a millisecond period — that will come. The solution that I came up with is to use the SO_LINGER socket option, which allows you to override the socket's TIME_WAIT.
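Setting SO_LINGER with a zero timeout makes close() send an RST instead of the normal FIN handshake, so the socket never enters TIME_WAIT and the ephemeral port is released immediately. A sketch of the option Antonio mentions (the function name is made up; kubelet itself is Go, this is Python for illustration):

```python
import socket
import struct

def probe_socket_no_time_wait():
    """Create a TCP socket whose close() skips TIME_WAIT.

    SO_LINGER with l_onoff=1, l_linger=0 makes close() reset the
    connection (RST) rather than doing the normal FIN/TIME_WAIT
    teardown. The trade-off is that an RST can discard in-flight
    data, which is usually acceptable for a health-check probe.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                 struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    return s
```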
K
Know
there
is
one
other
way
that
you
can
do
as
well,
and
that
is
sort
of
that
in
the
cubelet.
Instead
of
letting
the
system
assign
a
thermal
ports
that
you
sort
of,
you
go
and
grab
a
number
of
the
thermal
ports,
and
then
you
use
them
yourself,
because
you
can
use
the
same
port
when
you,
when
you
go
to
all
the
poor
to
all
the
probes.
K
So
if
you
say
you're
going
to
do
300,
Pro
being
during
X
minutes,
you
can
use
the
same
thermal
Port
out
to
all
of
them,
because
they're
going
to
have
a
different
IP
address.
So,
instead
of
doing
a
connect
with
the
zero
as
the
source
Port,
you
do
connect
with
the
source
Port
set
to
a
port
that
you
have
grabbed.
And
that
way
you
don't
get
this
problem
and
that
sort
of
I
used
before.
K
Is
yeah
yeah
because
because
every
every
container
you're
going
to
go
to
has
a
different
IP
address
right
or
you
go
to
different
iprs
and
different
port
in
them?
You're
not
going
to
send
send
the
two
probes
to
the
same
destination
address
and
destination
Port.
If,
unless
you
go
over
the
service,
that's
the
difference,
but
I
mean.
If
you
go
directly
on
the
container,
you
will
not
yeah.
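The trick K describes works because a TCP connection is identified by the full 4-tuple (source IP, source port, destination IP, destination port): one pre-chosen source port can carry many simultaneous probes as long as each targets a distinct destination, which is the case when probing pods directly rather than through a Service VIP. A sketch (function name is illustrative):

```python
import socket

def connect_from_fixed_port(src_port, dst_ip, dst_port):
    """Open a probe connection from a pre-chosen source port.

    Binding before connect replaces the kernel's "source port 0 =
    pick an ephemeral port for me" behavior, so repeated probes stop
    draining the ephemeral range. SO_REUSEADDR lets the port be bound
    again while earlier sockets to other destinations still exist.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", src_port))           # instead of a kernel-assigned port
    s.connect((dst_ip, dst_port))
    return s
```

As K notes below, this is safe only while no other process on the node is using that source port.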
K: So, as long as no one else is using the same source port on that system, you can write all that stuff in the sending process yourself. I've done it for a whole system — I mean, in the kernel, to reuse the ports. So, you know what, I'll do this offline with you; I will...

D: Yeah, the problem is that I was too ambitious and wanted to do everything: the linger, and the reuse, and blah blah blah.

A: Sweet deal. I actually have to leave because I have another meeting, and we are at time. It's a great discussion. It sounds like we have carryover for further discussions on multi-network and on Antonio's PR. So, awesome — I'm going to go ahead and stop the recording.