Description
Kubernetes Enhancement Proposal (KEP) Reading Club is an initiative by sig-architecture.
KEPs covered in this session:
* https://github.com/kubernetes/enhancements/blob/622562442f91dfe23e1d2001534d301e7af3f2a7/keps/sig-network/2104-reworking-kube-proxy-architecture/README.md
* https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2879-ready-pods-job-status
A
Okay, hello everyone: this is a KEP Reading Club, and it's a public Kubernetes community meeting. Like all such meetings, it follows the Kubernetes code of conduct, which asks everyone to be excellent to each other. It is recorded, and the recording will be posted on the Kubernetes YouTube channel, so please don't do or say anything that you don't want to be public.
A
Today we have two KEPs that we will discuss. I will post the link to the meeting agenda in the chat once again, and there, if you don't have edit access, by the way, you can get edit access by joining the SIG Architecture Google group, the Kubernetes architecture group, and you can then add yourself to the attendees and notes. But for now, we have two KEPs. One is, I believe, something that was... no, it's not.
A
Okay, one is about reworking the kube-proxy architecture, and another one is "Track ready Pods in Job status". Do we have the author of the first one, right, or not?
B
I'm not the author of the first one. Hey, I know I'm not the author of the first one, but I've been working with the folks who've been contributing to this sub-project called kpng.
A
Great, okay, so we will start with this KEP. Just to remind everyone how this meeting goes: I'll start a timer for 10 minutes, and during these 10 minutes everybody can read the KEP. After that, if we need to, we can spend a little bit more time reading; if not, we will proceed to discussion.
A
Questions, concerns, whatever you want to say about this KEP. Are there any questions about how this meeting works?
A
No, it seems that it's clear. Okay, in that case, I will start a 10-minute timer, and you all have time to read the reworking kube-proxy architecture KEP.
A
So 10 minutes have passed. I think we can have another three to five in case anybody hasn't finished reading. And thank you for posting links to the examples.
A
Okay, has everybody read the KEP? Yep, very good. Okay: questions, comments?
C
I just want to talk about the decoupling effort that's being... am I audible?
C
Okay, yeah, so I just wanted to talk about the complete decoupling effort that's been going on. So, just so I get this right: this project is basically aimed at kind of extracting kube-proxy?
B
Thanks, thanks for your question. So, what this KEP tries to... okay, so first a bit about kube-proxy, right. So right now the kube-proxy implementation is sort of a DaemonSet which is there on every node of the cluster, and one of the drawbacks of the existing implementation of kube-proxy is that it kind of watches the API server throughout, right, for all the endpoints that are being changed and whatnot.
B
What this KEP tries to do is solve that problem in a way that decouples the Kubernetes service business logic from the actual backend implementation. So kube-proxy proxies the service endpoints, from the service IPs and node ports to the pod endpoints, and at the same time it uses a particular backend: initially it was the userspace proxy, then there was iptables, and so on and so forth.
B
So with this implementation we're trying to decouple all the general aspects of kube-proxy from the backend, so that the userspace implementation or the iptables implementation shouldn't be doing the same things, right? They shouldn't be doing the common things that can be decoupled from the implementation. So anyone can add an implementation, irrespective of, you know, kube-proxy or this Kubernetes service business logic. Does that make sense? Have I answered your question?
A
I like this proposal. I'm interested in the use cases that it mentioned. The proposal mentions a few projects, a few implementations of products like Cilium, or, what else was there, Calico, for example. I mean, they are implementing the kube-proxy backend, right? Yep, yeah, yeah. I see that there's a user stories section that is mostly TBD, but yeah, I'm looking forward to seeing these user stories. Are you working with these projects somehow to develop that?
B
Yep, yep, that's a great question. So yeah, we've been working, you know, around this KEP in this SIG Network sub-project called kpng, which is, you know, kube-proxy next generation, sort of. And what we're trying to do over here with kpng is, as I've already talked about, the decoupling of the kube-proxy general aspects from the backend part of it.
B
So we've been trying to have the kpng server over here and implement the existing backends. So we've been trying to implement the iptables backend, which is basically porting the backend from the kube-proxy code base, as it is right now, over to this kpng project, and I've been involved with the userspace porting. So one of the things with userspace was...
B
This was like the original kube-proxy, which, you know, uses a round-robin algorithm to proxy the service endpoints and whatnot, and then it was transitioned to something like iptables and, you know, IPVS most recently. So in this sub-project we've been trying to implement all of these backends, port them into the kpng project, and we've been looking for help, right?
B
So if anyone's interested to, you know, hang out with us, we have a weekly meeting every Friday that happens at around 8:30 p.m., 8:30 a.m. PST, I mean. So just come hang out with us; we'll be happy to pair. And, you know, I personally don't come from a networking background, so I've been trying to learn about all of the networking aspects through this project. So yeah, we've been looking for help.
D
I just missed... after you said that kube-proxy generally stays on each node that is built, and it watches the API server for any kind of changes that happen. After that, I kind of missed it, okay.
B
Yeah, so kube-proxy, the current implementation of kube-proxy, is sort of a DaemonSet, so it will be there on every node, watching the API server for the endpoint changes, and with that, as you scale the cluster up, there will be a considerable load on the Kubernetes API server.
B
So what kpng tries to do is decouple that, so that there is a sort of kpng server and kpng client. The server can watch the API server, endpoints and whatnot, without putting load on every node, without putting load on the API server, and then give the state to the kpng client, which can then be proxied to the backend. So this is one of the things that kpng is trying to solve.
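The server/client split described here can be illustrated with a toy model. This is only the shape of the idea, not kpng's actual API; all class and method names below are made up for illustration. One watcher holds the state and fans it out to node-local clients, instead of every node running its own watch against the API server.

```python
# Toy illustration of the kpng server/client idea (not the real kpng API):
# a single watcher observes endpoint changes and fans the resulting state
# out to per-node clients, so the number of API-server watches stays
# constant as the cluster grows.

class Server:
    """Single watcher: holds the latest endpoints state."""
    def __init__(self):
        self.state = {}      # service name -> list of pod endpoints
        self.clients = []    # subscribed node-local clients

    def subscribe(self, client):
        self.clients.append(client)
        client.apply(self.state)          # send the full state on join

    def on_endpoints_change(self, service, endpoints):
        # In real life this event would come from one API-server watch.
        self.state[service] = endpoints
        for client in self.clients:       # fan out: 1 watch, N pushes
            client.apply(self.state)

class Client:
    """Node-local client: hands the state to a backend (iptables, ipvs, ...)."""
    def __init__(self, backend):
        self.backend = backend

    def apply(self, state):
        self.backend.sync(dict(state))    # backend only writes rules

class RecordingBackend:
    """Stand-in backend that just records what it was asked to sync."""
    def __init__(self):
        self.last_synced = {}

    def sync(self, state):
        self.last_synced = state

server = Server()
backends = [RecordingBackend() for _ in range(3)]  # pretend: 3 nodes
for b in backends:
    server.subscribe(Client(b))

server.on_endpoints_change("web", ["10.0.0.1:8080", "10.0.0.2:8080"])
# every node's backend now holds the same state, produced by one watch
```

The point of the sketch is the ratio: one watch against the API server regardless of cluster size, with each backend reduced to a single responsibility, syncing the state it is handed into networking rules.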
The other thing that kpng is trying to solve is that the current implementation of kube-proxy is kind of complicated, with multiple backends, and multiple backends doing almost similar things, sort of, with the service topology and all of those things. A backend is supposed to do one thing: the iptables backend is supposed to generate iptables rules and whatnot, but apart from that, in the current kube-proxy implementation it is also involved in the service topology and all. With kpng we're trying to decouple that, right?
B
So a backend can do the only thing that it's supposed to do, which is just proxy the endpoints and create networking rules and whatnot. So if you're interested in this, even if you are not coming from a networking background, that's totally cool. If you are interested in spending some time to understand what is happening over here and, you know, contributing to this sub-project, then you are very welcome.
B
Yeah, along those lines, you're correct: something that watches the API server and then sends the state to the kpng client, something along those lines. Yeah, that's right.
B
So yeah, yeah, I mean, this is not just... so the current implementation of kube-proxy is not just watching the API server for one particular thing, right? It has to maintain its state, it has to check for the endpoints that are changing, so there are multiple things happening.
B
I was trying to kind of put it in simple words, saying that it just watches the API for the endpoints, but there are multiple things happening. The current implementation of kube-proxy watches for the service endpoint changes, right, so that if the endpoints change, then the corresponding service IPs and node ports can be proxied to a particular pod endpoint. So there are multiple things happening, and one of the problems with that is, as you scale the cluster, since every node will do the same thing again and again, it will put a considerable load on the API server, which is what we're trying to avoid.
C
So, just to add on to that, the contributing thing that you just said: I'm interested in contributing to this, but I'm not sure about the beginner thing, because, like you mentioned, I myself don't have much background in networking and all that. So is it beginner friendly? I mean, I don't want to kind of slow down the development that's going on. I don't want to be asking a lot of questions, a lot of very general questions, because the repository mentioned that the team is very small at the moment. So I'm not sure if they would be able to handle the beginners that are coming over to the kpng repository, yeah.
B
Oh, so first thing: everyone is welcome. I myself am a beginner, and I've been asking all of these dumb questions, and people have been answering and everyone has been helping. So that shouldn't be a barrier, that you're a beginner or whatever; you should be able to ask all the questions that you want to ask. The first step would be to ask these questions on SIG Network, and the second step would be to join us on Friday.
B
At 8:30 a.m. PST, you know, for the kpng meeting, and we'll make sure that we find something for you to work on. Having said that, one thing that I would like to point out here is that kpng is not a project wherein you'll be able to... or, this is something that happened to me, wherein I was not able to come up to speed within a few weeks or something like that. I had to take a considerable amount of time.
B
So
it's
it's
a
slow
project
because
and
it's
difficult,
but
at
the
same
time
the
team
and
all
of
us
are
there
to
help
you
yeah.
You
know
you
can
ask
your
questions.
You
can
learn
a
lot
and
if
you're
interested
in
you
know
kind
of
not
giving
up
and
sticking
at
it,
then
please
hit
us
up
on
sick
network,
so
yeah,
it's
just
because
you're
a
beginner
for
that
matter.
That
shouldn't
be
a
barrier.
I
myself
as
a
behavior.
C
Yeah, that makes sense. I'll be in touch, yeah, thanks.
A
Yeah, thank you. I have one question, or maybe not a question but a thought. What I'm thinking about is Kubernetes clusters managed by cloud providers: with that implementation, I can just use any kube-proxy backend, and cloud providers shouldn't restrict that in any case, right? Or can they?
B
As far as I know, the default backend is IPVS for kube-proxy, but then the mode for kube-proxy can be changed through a ConfigMap. So you can change it to be something like iptables or something of that sort, and that should work. But I may be wrong, so the safe thing would be to just ask this question in SIG Network.
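For reference, a minimal sketch of the ConfigMap being described, assuming a kubeadm-provisioned cluster (other distributions lay this out differently; the field names come from the KubeProxyConfiguration API):

```yaml
# kube-system/kube-proxy ConfigMap as created by kubeadm
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"    # "" (falls back to the default), "iptables", or "ipvs"
```

After editing the `mode` field, the kube-proxy pods have to be restarted to pick up the new configuration.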
A
Okay, thank you for that. I'm asking because I'm also not sitting in kube-proxy all the time, so I don't know for sure, but I remember being very disappointed by the fact that EKS didn't support IPVS, and yeah, I'm wondering if that proposal would make the proxy implementation more flexible if I want to have a hosted Kubernetes cluster, with the backend managed by the provider, not managed by myself.
B
Yeah, I mean, that's a good question: if a particular cloud provider doesn't support a particular backend, then what does kube-proxy do? And I think that if it doesn't support a particular backend, then you can fall back on iptables or something of that sort. But then you can also have other CNI providers like Calico and, you know, use them. But yeah, I mean, I'm not completely sure about the answer, so feel free to, you know, ask this on SIG Network.
A
Okay, thank you, yeah. I guess it also probably depends on the cloud provider or something here.
A
Cool. Are there any other questions to this KEP?
A
I think we are good. In that case, we have 20 minutes left, so that's enough time to read the second KEP and then have 10 minutes for discussion. The second KEP is "Track ready Pods in Job status".
A
Okay, it's been almost 10 minutes. Has everybody read the KEP, or does anybody need some more time?
A
Yeah, looks like it. A little addition on top.
A
Okay, yeah, I guess... I can see where it can be useful to have a field saying how many pods are actually ready. I can imagine workloads that could use that field.
A
I
can
also
see
where
it
can
be
very
confusing
for
people
and,
I
suppose,
they're
they're.
As
with
anything
in
a
complex
system
they're,
it's
it's
probably
impossible
to
avoid
some
confusion,
but
just
the
fact
that
it
has
to
be
synced,
and
ideally
there
there
will
be
some
slows
for
that.
It
should
be
very
fast,
but
still
like.
A
A second is not nothing, so sometimes discrepancies might be confusing to people. And another thing is that currently the Job status has counts for active, succeeded, and failed pods, and each pod can be only in one of those states, right? And with this new ready field...
A
The
ready
pods
will
be
also
included
in
active
count,
so
this
field
won't
be
disjoint
anymore.
If
I
understand
that
correctly,
if
I
don't,
if
I
miss
something,
then
correct
me,
but
as
I
understood
that
that
active
fields
will
be
active,
count
will
be
pending,
plus
running
bots
and
ready.
Will
be
only
running
and
past
readiness
probe,
so
they
would
be
kind
of
satisfied.
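For reference, a rough sketch of what a Job's status could look like with the proposed field (the counts are illustrative; per the KEP, ready counts a subset of active):

```yaml
status:
  active: 5      # existing field: pods that are pending or running
  ready: 3       # proposed field: running pods whose readiness probe has passed
  succeeded: 2
  failed: 0
```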
A
The latency, I guess, is expected to be between 500 milliseconds and a second, so not long. But if you have very time-sensitive workloads, then...
A
For 5,000 nodes, but not for things like that, not for, like, all updates of all the fields, I guess, yeah. I'm wondering... I guess there should be some kind of scale run for such a change, when this asynchronousness could cause these big discrepancies in very large clusters.
C
Two cents on this: I don't think the asynchronous nature would be a problem, or at least it wouldn't be a new problem, because the Job status active field does the same thing, like it does the same thing for the running and pending phases. So we need to do that for the ready state, and that's it. So I think the implementation should be the same up to that point, and after that we just poll for the ready pods, so the asynchronicity...
C
If it is even now there, then it would be there with the implementation for the ready pods KEP. So yeah, I mean, if the scalability tests are good right now, then it should be good for ready as well, yeah, because the underlying implementation is mostly going to be the same.
A
Yeah, I see that also there is a metric, job_sync_duration_seconds.
A
I'm
not
sure
if
this
is
the
metric,
they
propose
this
metric,
like
in
production
readiness
section,
I'm
not
sure
if
this
is
metric
for
think
of
this
specific
field
or.
A
Yeah, but I guess if it's measured somehow, then, if there's a metric, it would be trackable.
A
We are almost at time; we have a minute. Any more questions or comments on this KEP?
A
Thank you all very much for joining. I will stop recording now.