From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20211028

A
Welcome, everybody, to the Kubernetes SIG Network meeting for Thursday, October 28, 2021. The first agenda item is triage: tag, you're it.

C
But I put a Life Saver in my mouth right as I'm about to start triage. Share screen... window... this one. Double, triple check. Okay, everybody can see me.

C
All right, cool. We've got... I started doing some triage, and I see some other people did some. I did not get all the way through the list today. First up, from Jay: it sounds like a Windows issue with respect to kube-proxy. I didn't quite follow the gaps, but it sounds real. Anybody here object to accepting it?

C
A reminder for folks who want to volunteer but are afraid: we're not asking you to fix the bug, just look into it and see if it's real. This one seems like a bug in the kube-proxy health check logic. The user reports that even though it fails to run iptables, it is reporting healthy.

C
We have logic in our health check that's supposed to check the time of the last sync and fail if it's not healthy, and that does not seem to be happening, so I posted a little bit of the relevant code here. One of these conditions is triggering in this case. So the first thing to look into would be: is the health check actually working at all? Throw a little bit of instrumentation in here, maybe, and just try to figure out, like...

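To make that concrete, here is a minimal sketch, with hypothetical names rather than the actual kube-proxy source, of the kind of staleness check being described, with the suggested instrumentation included:

```go
// Sketch of a kube-proxy-style healthz check, assuming two timestamps:
// when a sync last became necessary (queued) and when one last
// succeeded (updated). Reporting both is the instrumentation suggested
// above for finding which condition is wrongly reporting healthy.
package healthsketch

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type healthzServer struct {
	mu          sync.Mutex
	lastQueued  time.Time     // a sync became necessary
	lastUpdated time.Time     // a sync completed successfully
	timeout     time.Duration // how stale is too stale
}

func (h *healthzServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	h.mu.Lock()
	queued, updated := h.lastQueued, h.lastUpdated
	h.mu.Unlock()

	// Healthy when no sync is pending, the pending sync already finished,
	// or the pending sync is still within the timeout.
	healthy := queued.IsZero() || updated.After(queued) ||
		time.Since(queued) < h.timeout
	if !healthy {
		w.WriteHeader(http.StatusServiceUnavailable)
	}
	// Always report the timestamps that drove the decision.
	fmt.Fprintf(w, "lastQueued=%v lastUpdated=%v healthy=%v\n",
		queued, updated, healthy)
}
```
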
C
My initial reaction was exactly Cal's: no, no, no! No! No! Interestingly, we actually do have support for hostnames in endpoint slices, but that was for a different reason, which I failed to remember. Right now...

H
I think it was a pretty theoretical reason: the idea that endpoint slices could be used and written to by things outside, things that wouldn't be consumed through...

H
It was never intended that kube-proxy, or any component like that, would actually resolve FQDNs. It was just that endpoint slices are a generic thing that can represent endpoints.

C
Why did I say okay to that? I'm the one who's always yelling about not designing for hypotheticals, right? Well, anyway, it's there now. It's an interesting question, though. We could write an endpoint slice with a hostname for an ExternalName service. It wouldn't hurt anything, but I'm not convinced it would actually solve this user's problem. Anyway, this... I don't...

C
Yeah, they're trying to set up a horizontal pod autoscaler based on a custom metric.

C
They are trying to do this to an external thing that is running in AWS somewhere, and so they have a hostname for it. But that hostname is an Amazon-style DNS name, right, which is obviously a CNAME itself to some real set of names and can change. So they're trying to set up a local service of ExternalName type, which redirects to their Amazon load balancer, which then forwards to their thing. In this case it's a Kafka service, I think.

E
Let me get this straight... you're messing...

C
I mean, there's nothing it can do, honestly. iptables kind of does the worst thing: it resolves the name one time and then never tries again, yeah. So, you know, in theory it would be something that would set up a timer and say: every TTL, whatever the TTL for that name is, right, every TTL div two, we go and re-resolve the name, take all the IP addresses that it resolves to, and program those as equivalent endpoints. But we just decided not to do that.

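For illustration, a sketch of that hypothetical re-resolve loop; the names here are made up, since, as said above, kube-proxy deliberately does not implement this:

```go
// Hypothetical sketch of the idea described above: re-resolve an
// ExternalName every ttl/2 and hand every resolved address back as an
// equivalent endpoint. Nothing like this exists in kube-proxy.
package dnssketch

import (
	"context"
	"net"
	"time"
)

// watchExternalName re-resolves host at half the record's TTL and passes
// the full address set to program, which would rewrite the dataplane.
func watchExternalName(ctx context.Context, host string, ttl time.Duration,
	program func([]net.IP)) {
	ticker := time.NewTicker(ttl / 2)
	defer ticker.Stop()
	for {
		if addrs, err := net.DefaultResolver.LookupIP(ctx, "ip", host); err == nil {
			program(addrs) // all resolved IPs become equivalent endpoints
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```
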
C
Right. I mean, there's a separate KEP, or pre-KEP or something, for FQDNs for policy, right, and the same concerns were raised there. But it sounds like, for the people who are using this for policy, they're...

A
This one was interesting. Like, what do we expect to happen here? Ignoring all of the stuff in the bug: if, for some reason, part of the cluster can't contact the other part of the cluster, and in this case I think it was specifically one of the masters can't talk to a bunch of the nodes, or something like that, what do we expect to happen?

C
I'm sure I am... I remember having several discussions around various corner cases here. There's the clean shutdown one, which actually went through; then there's the dirty shutdown one, which I don't think has gone very far.

A
I mean, it sounded like they expected that after the partition was resolved, everything would just go back to working; it would basically be a temporary blip. Because it wasn't that (and maybe I misread it) the pods on both sides couldn't talk outside and didn't have, like, load balancer access or something like that; they just couldn't talk to each other. It's kind of like a... I don't know, a DMZ use case, in a way.

F
Can you add me to it? Can you assign it to me? I'll comment on it. With a cluster partition, the nastiest thing about a network partition is that you don't know, and because you don't know, you have to assume that it's bad. That's the shortest answer and the clearest, cleanest way of looking at it. And that's why, when people are running without, like, leases, master leases or leader-election leases, I always look at them like... okay, because, yeah.

M
...with me, and then I missed a bunch of pings on it and finally spotted those pings today. So it looked like they were binding their controller to an address that wasn't accessible by their worker nodes in a different network, and they were using the wrong kubeadm flag to try and reconcile that.

N
I've been away for a bit. Let me look at it and...

C
We can't...

F
I'm chasing a different avenue, which is the warning that Jordan talked about. That's okay! So that's the thing I was looking at this morning. Okay, fine, just keep it open so it remains nagging.

B
Yeah, so this actually already got merged, but anyway: I wrote this documentation on how kube-proxy works, because people, the OVN and Kubernetes people, are confused about some points and there's really no documentation. And I wasn't sure what to do with it, because it's not like end-user documentation, and I wasn't sure if we had any place to put developer documentation. So I submitted it to the kpng repo, thinking that in the future that's where people who are writing their own service proxy will end up, and it got merged there.

B
So, you know, we can keep it there or it can go somewhere else. People may also want to read through it and see if I missed anything; I'm much less familiar...

C
But go ahead. I was gonna say: did you document the way it works today, or did you document the way it's supposed to work today?

F
Okay, so there is a big gap in knowledge, in the body of knowledge, on how things work in the Kubernetes universe, and you have no idea how prevalent and prolific this symptom is. And I'm talking about people who are managing clusters, fairly large clusters, fairly large deployments, and they really don't understand how and when things work. We tested something internal to Microsoft where we did that, like 'how things work': ten to twelve sessions or so, each covering one aspect, and no shying away from details.

F
This is not the kind of conference session where we do nice graphics. No, this is jumping into the code: this is where the thing is, this is what it does, this is where the failure modes are, and so on. And I'm kind of hoping that we can find an avenue for this at the community level, because I think it's a big barrier for contribution, all right, and for having people actually try to hash things out, right, until we do something similar.

C
So we do that every week at the kpng meetings. All we do is write code and look at the old kube-proxy and figure out what the hell's going on, and we do it every Friday morning, and that's all we do. We don't have a Google Doc, we don't argue about APIs or anything. All we do is look at the existing code, and we just draw pictures and we look at it and we figure stuff out and we write code together. Like, that's the entire thing, yeah.

F
I am more than happy to contribute to that effort because, based on the results I've seen from that exercise we've done, people actually get more excited about the thing when they know how it runs, and it helps people actually contribute. And, at minimum, the minimum value you're gonna get out of this is higher-quality issues, or higher-quality problem reports.

F
Simple stuff, like network partition, like that issue we just discussed, all right, or like what happens when you start a pod, right? All of these things. So I am here saying I can help. I'm not...

C
A clear place for that... You can write some docs, and you have been doing that, that we stick up on the site, but those are often not read. And we all know that KubeCon is historically the place where people do talks like this, but, you know, there's only a couple of those a year, and if you really want to get into the details, you'll only get six inches deep before a 30-minute session is over, right? So I'm a big fan of very short presentations on one topic.

C
There isn't a... I don't think there's a CNCF-owned, or even Kubernetes-owned, like, YouTube channel where you can go participate. So you can do it on your own, or we can, like, jointly put it together and say, hey, here's the SIG Network YouTube channel, although that's sort of a commitment to, like, the future. I'm open to ideas.

C
So, at the end of every SIG Windows meeting, we do a pairing session that's an hour long. That timing works, but... because the stuff keeps changing, right? Like, you can write a million documents, but things are changing, problems are changing, the questions are changing.

C
We need a way that is durable, that lets people self-serve this sort of knowledge at their own pace and on their own time, but you're right that that alone is not a community. But wouldn't it be nice if everybody who showed up in SIG Network on Slack had already watched 20 minutes of Cal explaining kube-proxy? Like, the questions would be of much higher quality.

O
...from people that want to contribute stuff and that don't want to read, or sometimes... and stuff. So sometimes I feel like people don't have time to... well.

C
So this is why I'm a big fan of very short videos, right? If you make videos that are five minutes long, then people can literally watch them when their meeting ended early and they know they have to go to another meeting, but they can watch it, or they can watch it while they're indisposed, or whatever, right?

C
It's not like asking them to find 45 minutes to watch a KubeCon talk, right? Five minutes: here's one thing, go into the gory details about it. And then, if that one thing changes, it's fine; you throw away that five-minute video and do another one, right? Because they become more disposable the smaller they are. That's my theory, anyway; I haven't done a lot of it.

J
So it sounds like a combination of a short slide deck and possibly a YouTube video would be good, and maybe opening issues on small chunks of topics that seem like they would be good candidates, and letting people pick them up, maybe even as a first contribution, and then have the slides in the GitHub, pointing to possibly a YouTube video.

C
Or TikTok, as somebody says in the chat. We have a lot of good YouTube videos. Like, I found maybe seven or eight kube-proxy videos on YouTube the other day, because there was someone from Kulun that wanted to start getting involved, and I was like, well, I don't think there's any shortage of stuff. They're not as small as you're saying, but gosh, kube-proxy is so complicated. If you don't have 45 minutes to watch...

C
What are you gonna touch in enough detail in five minutes that somebody will come away going: okay, I understood that, and now that one little piece of the universe is clear in my mind, right? And then I can come back for the next piece tomorrow, or next week, or whenever I've got a few free minutes. I think it would be really interesting to just try to work up some outlines on this. You know, Google's done some of this; VMware's done some.

C
The problem with the 15-minute video is: if you share a 15-minute video with me and you say 'just watch this', I ain't got time for that. I can skim a doc. I can search for the keywords that I'm looking for in less time than that, or I can go to Slack and just ask my question in less time than that. Whereas with five minutes, I feel bad for not doing my homework.

C
Yeah, yeah, totally. And with five minutes you can be like: oh man, I didn't quite get that, let me go back to it. I don't remember where it was in that video, right, but it's only five minutes long, so let me just burn through the whole thing. Hell, I'll watch five minutes on 2x, right? Yeah. And then I can just dig in. It's like a personal tutor, right?

F
I just want to put it in a different perspective. Think of yourself trying to figure out, because I went through this experience, right, that entire merge-batch thing, right? It's just a bunch of Golang code. If somebody just had a slide or a document that points to where the code is, then I don't have to change the damn thing, all right. And mind you, I'm a stubborn SOB; I don't give up. And it is very... it's really like there's a bar you need, like...

F
There is a wall you need to break through to get to it, all right. Now, we need something that... like, I'm thinking: okay, now imagine somebody who's not familiar with that code base and all the tricks that we do in Copenhagen. Just imagine somebody trying to figure this out. And eventually, what am I trying...? Okay, I'm gonna say something; please don't take it as an offense, I really mean it.

F
I am afraid that we're creating our own version of a technology priesthood. Please, I don't want this to happen, and we're not doing it voluntarily, we're doing it involuntarily. And I'm trying to break this down, because it's in our best interest, right, that a newer generation, a fresher mind, newer people with newer eyes, look at that thing and make it better. Because that's what life is, right? So...

C
Yeah, yeah. It's like, you know, we could make things super easy, but... What I'm finding is, as a new person who's trying to understand how kube-proxy works, like, I'm the only person here who doesn't know how it works. So we sit here and we read through the code, and in the process of reading through the code every week, you learn a little bit more. You learn a little bit more, and just...

C
...because you're not filing enough bugs? Well, they're all going into kpng, right? So we're taking it all and we're making it make sense in kpng, where it splits everything up into modular subunits, right? And our plan is to actually do exactly what Cal is saying there. We could file bugs on the downstream one, or on the existing one, but what we're finding is that so many things, like component config, you just can't do in the existing one because of the way it's structured, right?

J
Also, it's not clear whether we are primarily targeting potential new contributors by pointing them to lines of code, or is it sort of architectural explanations? Because there were two kinds of things expressed: one is, yeah, functionally, what's going on with kube-proxy and things like that; the other one is, where in the code do I see the actual thing happening, or, you know...

C
Fair point. I think the answer is both, maybe not in the same videos, but definitely both. All right, I think we've spent enough time on this. I like Bridget's idea of Cloud Native TV; that would be really cool to figure out. Does anybody want to take point on trying to figure out what's involved with putting a video up there?

S
I can't sign up for something new when I still have docs I have to write, and a blog post I have to write for dual-stack in the next few weeks. But what I can point people to is that GitHub repo: CNCF has scaffolded a lot of the effort there. So if you take a look, you're like: oh, okay, running a show on CNCF TV doesn't look that hard. All the info is there.

C
I'll read it over. Like many of us here, for the next three weeks I know what I'm doing, and it's not that, right? So I may have a chance to take a look at it in more depth, but probably not till the second half of November.

F
I don't think any of the efforts we talked about, irrespective of their shape or form, will start before the new year anyway, before we get there. So we're fine already. And, to be honest, one of the things to keep in mind is: the less pressure you put on the people who are contributing to this, the better the outcome. So let's just keep it slow and steady; as long as we're consistent, we're fine.

C
Once we know where to publish the videos, I'm sure on our end there's at least one person who knows each one of the various proxies reasonably well; we could help put up some of the videos.

C
I think the imagined effort of making, editing, and publishing the video outweighs the producing-content part of it; at least in my mind it does. If I knew it was easy to just sit at my desk and talk for five minutes and then send a video somewhere and the rest would happen... man, I'd be down for that.

P
Yeah, I think the core ask is to add weights to endpoint slices. I can give a little bit of context for the use case. So I'm kind of representing SIG Multicluster here, I think.

P
The multi-cluster, or the MCS API, aspect that we have: I'm trying to define, or modify, it to also incorporate multi-network, where one Kubernetes cluster... I think that's the case today, where one Kubernetes cluster's network is isolated, or is not directly routable from the other Kubernetes cluster.

P
The idea here is to have a bunch of Kubernetes clusters which all have discrete networks, and then, in the case of MCS, or multi-cluster services, the way it would work is you would have an endpoint slice representing a service instance that's running in each of the clusters.

P
Now, it gets complicated when... not complicated, it's a common scenario, where the same service that's running in cluster one and cluster two might not have the same number of pods, or it might have different compute, right? So it might be running a different number of pods. So now, when it comes to load balancing, when someone is consuming a service that is spread across these two clusters, you need appropriate weights assigned to the endpoint slices in order for that load...

P
...balancing to work properly; otherwise it's just going to do round-robin, probably like a 50/50, even though the pods are not spread across these two clusters equally. So, yeah, I think the quote-unquote ask is to at least, I mean, have a discussion around adding weights to the endpoint slices. And here again, I'm very new to this group.

P
I overheard the discussion about making it easier for the contributors, so, yeah, I think the changes that are needed are probably also in kube-proxy, as you... yeah.

H
I'm not sure I understand. Thank you for writing this up; this overlaps with a lot of different things I'm working on right now, whether it's Gateway API and endpoint slices or, as Cal mentioned, topology hints. I'm trying to understand: in this scenario, you're talking about weights, you know, like, if you have fewer endpoints available in a given cluster, you want to weight those endpoints less. But what would balanced weights look like in this case?

L
Rob, for context, maybe I can help. I think this actually is something that you and I have chatted about before, but the idea is the normal MCS use case, with, you know, a distributed number of pods, not necessarily even, in each cluster, but a gateway between them, so there's only one IP exposed for each cluster.

L
Right
and
n
end
points
in
cluster,
a
x,
endpoints
and
cluster
b,
and
you
want
to
just-
and
you
still
want
to
distribute
traffic
relative
to
the
total
number
of
endpoints
in
each
cluster.
But
you
only
see
like
a
load
balancer
in
front
of
each
cluster.
H
Yeah. With topology hints we're trying to also basically subset groups of endpoints and deliver them to, in that case, zones. But in this case it sounds like clusters are the thing you're most concerned about.

C
I mean, you can see how people would immediately want to start using this for: this set of pods has four CPUs and this set of pods has two CPUs, therefore I should set the weight to two for all of the first group, right? I don't think that's what is being described here, but I do think that's the logical conclusion.

L
Yeah, this seems like there's an MCS use case that is really just, you know... The requirement is: if I have 100 endpoints in one cluster and ten endpoints in another, and both have a single point of ingress, then I want 10x the traffic to the first cluster. But yeah, exactly, Tim: it seems like this is much more generic and useful than just...

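As a sketch of that requirement, with hypothetical types rather than an existing Kubernetes API: weighting each cluster's single ingress IP by the number of endpoints behind it gives the 10:1 split being described:

```go
// Hypothetical sketch of "weight = number of backends represented":
// each remote cluster is one gateway IP whose weight is the count of
// endpoints behind it, so a cluster with 100 endpoints receives 10x
// the traffic of a cluster with 10.
package mcssketch

import "math/rand"

type gatewayEndpoint struct {
	IP       string
	Backends int // endpoints behind this cluster's single ingress
}

// pick chooses a gateway with probability proportional to its backends.
func pick(gws []gatewayEndpoint, r *rand.Rand) string {
	total := 0
	for _, gw := range gws {
		total += gw.Backends
	}
	n := r.Intn(total) // panics if total == 0; callers must check
	for _, gw := range gws {
		if n < gw.Backends {
			return gw.IP
		}
		n -= gw.Backends
	}
	return "" // unreachable while total > 0
}
```
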
P
Yeah, I think that's a very good point. I think the locality aspect would also come into the picture, because, if you see in this particular diagram, it can be in a different region.

P
And
I
think
there
is,
there
is
topology
on
their
hints
right.
I
think
yeah
again,
I'm
not
like
well-versed
with
that,
but
I
think
there's
another
effort.
That's
ongoing,
where
the
client
can
make
that
decision.
C
Yeah, so I think I started to go in this direction and then pivoted a little bit. There's actually a related concept here of cost, right? Even if they're in the same zone, an endpoint behind the gateway has a slightly higher cost than an endpoint that's not behind a gateway, and we may need to factor that into these decisions. If we're going to get this smart with it, that may need to be part of what we think about.

J
So is this requesting a kind of dynamic update where, as services are reconfigured to have different numbers of endpoints, the source accordingly adjusts its weights? Or is this just a one-time static weighting? Because I think the whole point about the MCS APIs currently is that there is no dynamic communication across clusters about the weights of...

Q
One thing... yeah, go ahead, Rob.

H
Yeah, so I just want to add a little bit... well, to clarify one thing, and that is that we considered weights for the topology approach that we're doing for single-cluster Kubernetes topology hints, and one of the big concerns with that is that when you add weights, you very likely increase the number of updates you're going to be doing for endpoint slices, because weights likely change more rapidly than the underlying endpoints.

H
Yeah, I agree; I don't think it applies here. I just wanted to say that if we were to add weight, I'd have those concerns: it may make sense for multi-cluster, but I have concerns about single-cluster. The other part I want to clarify here is, for Gateway: it seems like we do have, as this doc already mentions, we do have weight in Gateway, and you can forward to different clusters, at least theoretically. Does that help at all? Is the...

H
That is absolutely possible. It will depend on the implementation of the Gateway API, though. I don't know... I know at least with GKE we're planning on a multi-cluster gateway implementation. I'm not sure how many other implementations are out there; there are lots of Gateway API implementations, but the overlap between that and multi-cluster is a little bit smaller right now.

L
I think we're going to point at a lot of things, but: the gateway had weights between different services that represented each backend; each one of those pointed at the gateway in the remote cluster; and so then, using Gateway, you're kind of shoving Gateway behind the MCS. There's definitely a lot of steps involved in this. It does not seem simple, but I could see it getting the traffic where it needs to go.

L
But
I
could
see
this
working
where
you
have
an
mcs
service
that
points
at
a
local
gateway
that
points
at
a
set
of
local
services
that
proxy
to
the
to
each
cluster's
service,
and
that
would
let
you
kind
of
assemble
this
chain
of
of
load,
balancers
and
proc.
I
mean
again
like
it
seems
like
it
would
technically
work,
but
we're
definitely
talking
about,
like
several
load
balancers,
to
do
what
you
you
know
should
there
be
introduce
additional.
H
And this is me not completely understanding multi-cluster services, but could you, you know, basically have each cluster export a different variation of a service? So, basically, service foo-cluster-a and service foo-cluster-b, and then your gateway targets those two services, as service imports, and weights between them, kind of thing. I don't know if that's possible, but maybe that's a couple fewer steps.

Q
I think that weight is at a very semantic level between services, this one, right? Like, I guess my question here is: if you add weight, is it infrastructure, in which case you need to very precisely define the fact that if you have a certain weight, it behaves a certain way? Or is it simply describing a hint to another system to kind of route traffic? I guess that's my... going back to my previous question, that's sort of...

C
...was asking: is weight too general? Maybe it makes more sense to make it something specific, like 'number of backends represented', right? Very clear: this is not about a bigger pod versus a smaller pod; it's about how many endpoints I am representing. It'll still get abused, but at least we can tell people they did it wrong.

C
I'm not hard against it. Only the old people got that joke. I'm not hard against the idea; I think getting really clear semantics is really important.

C
You know: is weight the right metric, or does it need to be something more detailed, like number of endpoints, or capacity? Now, we don't want to do capacity here... like number of connections. I don't know what the right semantic is. I think, like, one thing...

P
Yeah, so, I mean, we use Istio today, and I think the way this is done is: you have, kind of, the proxy, or the sidecar, that does this for you, right? Istio obviously uses Envoy. So you have the endpoints, and then those endpoints are weighted. I mean, there are different sets of endpoints... the exact same thing which I am trying to describe with the endpoint slices.

P
There
is
a
concept
of
endpoints
and
you
have
a
set
of
endpoints
which
have
a
different
weight
versus
the
others.
So
there
is
a
load
balancer
for
the
gateway
and
that
load
balancer
has
a
few
ips
they
get
like.
So
there's
a
concept
of
weight.
There.
C
So
so
I
would
love.
Is
that
yeah,
if
this,
if
you
want
to
take
this
dock
and
iterate
on
it
and
by
calling
out
previous
work
like
that,
would
be
useful?
I'd
really
love
to
see
us
think
about,
like
is
weight
an
integer
or
a
floating
point
number.
What
range
and
what
is
the
semantic
of
them?
Is
it
or
is
it
treated
like
shares
for
cpu
shares
right
where
it's
a
fraction
of
the
total.
P
And I don't know what the right... The weights for Envoy, as far as I know, are integers, so it can be anything; you can say 46. I mean, it's not a percentage. Yeah, you can say one and two, which means the second one will have twice the amount of traffic versus the first one. So...

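To spell out those share semantics, this is plain arithmetic rather than any particular proxy's API:

```go
// Share-style integer weights as described: {1, 2} means the second
// backend receives twice the traffic of the first, i.e. 1/3 and 2/3.
package weightsketch

// fractions converts integer weights into each backend's share of traffic.
func fractions(weights []int) []float64 {
	total := 0
	for _, w := range weights {
		total += w
	}
	out := make([]float64, len(weights))
	for i, w := range weights {
		out[i] = float64(w) / float64(total) // {1, 2} -> {0.33..., 0.66...}
	}
	return out
}
```
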
H
...less likely to have the same number of updates, or potential misuse. I don't want to say misuse, but it feels a little bit more targeted to this use case.

C
But that's just... I'm worried. I said it, and then I wanted to back away from it, because, with the distributed nature of kube-proxies, when you say 'capacity', you're sort of promising people that you're gonna respect that, whereas when you say 'weight', you're just sort of suggesting: I'm going to bias my decisions based on the weight. And in a distributed model we're never going to be able to respect capacity properly, especially since we're at L4, not at L7. Like, capacity for what? Number of connections?

C
So
I
tangential
but,
like
you
know,
I
was
digging
around
and
I
guess
this
goes
back
to
cal's
thing
like
trying
to
figure
out
how
things
work.
There's
a
plugable
load
balancer
in
the
user
space
proxy
yeah
and
I'm
like
well.
I
thought
that
was
kind
of
cool,
but
we
only
have
one
one
way
of
doing
it,
which
is
the
round
robin
right.
So
it's
like
well,
that's
because
we
very
quickly
figured
out
that
doing
it
in
users
basis
yeah,
but
I'm
just
like.
C
I
thought
that
was
kind
of
cool,
but
I
guess
you
can't
do
that
with
iptables,
and
I
was
going
to
ask
about
that.
So
I
was
like
yeah.
Is
it
just
because
we
killed
it
or
did
somebody
at
some
point?
Have
a
plan
of
actually
writing
a
user
space
proxy
that
I
mean
initially,
it
was
the
round
robin
was
just
the
easiest
thing
we
could
do
right
and
yeah
literally.
I
think
it
was
right
about
1.0
when
we
said
this
user
space
proxier
is
terrible.
C
We
should
do
something
better
and
I
started
learning
happy
tables
right.
So
blame
it
all
on
me,
but
you
yes,
so
boy
says
you
can
wait.
You
do
weights
without
be
tables
right,
like
all
of
our
yeah,
they
wait
now.
Yeah
I
mean
they.
They
wait.
C
They just do equal weight for everybody, so we can bias all that and make the math a little bit more complicated, but that seems totally doable. Whereas if we say 'capacity', it sort of implies that I'm going to keep track of how many open connections there are and pick the guy with the lowest utilization. Maybe that's just me reading into it, but that's the sort of implied contract that I see. But we think you're...

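A sketch of what 'making the math a little more complicated' could mean. kube-proxy's iptables mode spreads traffic over a chain of statistic-mode random rules, where endpoint i of n currently gets probability 1/(n-i), which works out to an equal split; with weights, each rule's probability would become its weight over the weight still remaining at that rule:

```go
// Sketch of weighted endpoint selection expressed as the per-rule
// probabilities for an iptables chain of
// "-m statistic --mode random --probability p" rules.
package iptsketch

// ruleProbabilities returns each rule's probability in chain order; the
// last rule always fires unconditionally (probability 1). For example,
// weights {2, 1, 1} yield {0.5, 0.5, 1.0}, an overall 50/25/25 split.
func ruleProbabilities(weights []float64) []float64 {
	remaining := 0.0
	for _, w := range weights {
		remaining += w
	}
	probs := make([]float64, len(weights))
	for i, w := range weights {
		probs[i] = w / remaining // chance given the chain reached rule i
		remaining -= w
	}
	return probs
}
```
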
F
The video I posted in the chat, the YouTube video I posted in the chat, is very informative, right, about the load-balancing algorithms and so on. Once you start saying 'capacity' and 'busyness' and so on, then you're implying there's a feedback loop from the target back to the load balancer.

F
All
right-
and
this
wording
usually
doesn't
that's
why
weights
is
the
only
way
we
have
right,
because
it,
the
decision,
is
now
right,
based
on
what
I
have
under
under
under
under
at
the
point
of
routing,
so
this
this
this
guy
spent
a
lot
of
time
on
this,
and
I
I
strongly
recommend
that
it's
a
good
view.
It's
a
good
session
anyway,.
O
So we did... they did, at some point in the past, talk about...

C
Okay, so who's going to carry this forward?

C
So let me say, and I'll make this my parting words: the biggest thing that kills good ideas in our community is people giving up and walking away. And it's not because we're trying to be a giant pain in your butt here, but we're trying to work through the semantics and understand what it is before we take on new ideas and change a system that is sometimes more resistant to change than we're happy about and, you know, take risks with all of our customers.

C
Cool, thanks. Sorry, last, last thing: code freeze is in two and a half weeks or something, so I expect to be buried in pull requests for the next couple of weeks.

C
If you have pull requests that you want me to look at, the sooner you get them to me the better, because there's nothing worse than missing the boat because you sent it to me two days before the code freeze and I didn't have any time to look at it, and then I feel terrible and you feel terrible and things didn't get done. So the sooner the better. And I'm not just speaking for me; that goes for everybody who's doing reviews here, right: Dan, Dan, Casey, Antonio, Cal, everybody who's signing up to shepherd stuff across the finish line.

C
Yeah, that one's going, but there were a bunch of other ones that wanted to move forward. I don't know if Andrew's here; I think his name was on a few of them. So...

C
I think someone is still working on it from Google; I'm not sure whose name is attached to it at this point. I know Mache was looking at it. They're still hoping... I'm not sure if they're hoping to make 1.23 anymore or not. Yeah, I think they may have decommitted from 1.23.

C
They're just sitting on it and polishing it to a high shine before they send it in, right? Which is the wrong thing to do. No, I think... actually, I'm pretty sure now that they told me that they want to pull that from 1.23 and push it to 1.24 instead.