From YouTube: SIG Kube-Proxy Bi-Weekly Meeting (APAC) for 20220202
A
Welcome everyone: this is the kpng APAC meeting, and we are here on 2nd Feb 2022.
B
Hi. I mean, I've been using Kubernetes in clusters for quite a long time; I just wanted to understand what this group does here. Thank you.
A
Okay, so I don't have any specific agenda as such from my side, just a couple of updates slash calls for help. For the userspace port, so for userspace, we currently have this; this is the latest. So yeah, if anyone wants to dive into the userspace proxy port, then maybe this is the issue to start with, and that's the only update I have on the userspace port. So Vivek, if you want to tell us whatever you've been up to, please.
D
Go ahead, yeah. So I think it's iptables mostly; things are somewhat stable. At least there are only a few failing test cases that we know of, I think two or three of them, and even for dual stack, including IPv6 and IPv4, only a few of them are failing. One or two are conntrack related, and one, I think, should be fixed now, since we actually committed the change for the multiport. So yeah, that's it at a high level.
D
It will be all done, and then I was actually looking at one more iptables backend which would have a little better performance than what we have now. I've been thinking about that for two weeks, but I haven't had time to actually start working on it. Then there is some more refactoring and restructuring which I was doing, but I could not complete that; I think I'll put up that restructuring code in chunks. So that's pretty much it.
A
So do you need any help around this? I mean, how do we unblock you if you're blocked?
D
For the new iptables backend, it will be difficult for someone new to actually pitch in, but I think we already discussed the unit test cases, right, which we don't have. That would be a good starting point, because it might be easier to understand what the code does and other things. So yeah, I think in ipvs or iptables, if someone can start looking at that, that help would be good.
A
Cool, so basically we need unit test cases for iptables and ipvs, right, and if people can help us here, then that will be cool.
A
Yeah, so for folks joining this for the first time: what we do over here is basically work on this project called kpng, which is trying to come up with a new architecture, sort of, for kube-proxy. There are a couple of problems with kube-proxy, you know, when we try to scale it and things like that. Also, the kube-proxy code right now is pretty complex. The kube-proxy code is something that you will find in kubernetes/pkg/proxy, which is where the current kube-proxy code resides, and kubernetes-sigs/kpng is where we are trying to work on this project called kpng.
A
If you look at the kube-proxy code, it has got a couple of backends like userspace, iptables, and ipvs, and if you look at kpng: so one of the problems with kube-proxy is that if you dive into this code, there'll be a lot of overlapping stuff between iptables, userspace, and all the other backends.
A
One of the things that kpng tries to do is sort of decouple these aspects, so that the part that has to poll the API server for service and endpoint changes and things like that can reside in one place, and the other part, the backends, can reside elsewhere, decoupled from the part that talks to the API server. As of now, these are the backends that we have. I think if you bring kpng up, then by default...
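The decoupling described here can be sketched in a few lines of Go. This is a hypothetical illustration, not kpng's actual API: a single server component receives service change events (as it would from watching the API server) and fans them out to registered backends, so no backend ever talks to the API server itself.

```go
package main

import "fmt"

// ServiceEvent is an invented change notification: in a kpng-style
// design, one central component watches the API server and streams
// diffs like this to every backend.
type ServiceEvent struct {
	Op   string // "set" or "delete"
	Name string
	IP   string
}

// Backend is a hypothetical sink interface; kpng's real interfaces differ.
type Backend interface {
	Handle(ev ServiceEvent)
}

// Server fans one event stream out to all registered backends.
type Server struct {
	backends []Backend
}

func (s *Server) Register(b Backend) { s.backends = append(s.backends, b) }

func (s *Server) Publish(ev ServiceEvent) {
	for _, b := range s.backends {
		b.Handle(ev)
	}
}

// logBackend stands in for an iptables/ipvs/nft implementation:
// it just records the diffs it was asked to program.
type logBackend struct{ seen []ServiceEvent }

func (l *logBackend) Handle(ev ServiceEvent) { l.seen = append(l.seen, ev) }

func main() {
	srv := &Server{}
	be := &logBackend{}
	srv.Register(be)
	srv.Publish(ServiceEvent{Op: "set", Name: "svc-a", IP: "10.0.0.1"})
	srv.Publish(ServiceEvent{Op: "delete", Name: "svc-a"})
	fmt.Println(len(be.seen)) // prints 2: the backend received both diffs
}
```

The point of the shape is that only `Server` needs API-server access; each backend is a passive consumer of the diff stream.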
A
nft is the backend that comes up, but you can also switch to these other backends. What I was talking about a bit earlier was the userspace port: we are trying to port the userspace backend from kube-proxy to kpng, and this is the...
A
This is the work-in-progress PR, so one of the things that we're trying to do over here is to pull in this code and...
A
...patch in a userspace mode for kpng and see if that works, and one of the initial errors that we're stuck at right now is this. So if anyone wants to dive into this, feel free to look into this issue and things like that. To get to know what kpng is about in more detail, there's a work-in-progress KEP which is already linked here, and there are a couple of resources on how you can get involved.
A
Apart from this, there are a couple of other resources, mainly in the e2e test .sh script and also in the hack directory, on how to get kpng up and running, and this is where you can try out the userspace mode and things like that. So if anyone wants to hack on kpng, feel free to start with kpng's hack directory or by reading the KEP, whatever works.
A
Unit tests for iptables and ipvs: if you look at both these backends, and the other backends that we have in place, we don't have any unit tests. So this can be a very good starting point for someone who's new and wants to get involved, to start writing unit tests for this. Vivek, would you suggest looking at the unit tests which are already in place over here?
A
So yeah, this can be a good starting point for how to come up with unit tests, because if you dive into this code, there'll be a sort of similarity between the existing iptables backend of kube-proxy and the new backend that Vivek has been working on. So yeah, this is another area if anyone wants to get involved. We don't have an issue for it as of now, but after this meeting I'll go ahead and create one.
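For anyone picking up the unit-test work, the Kubernetes codebase favors table-driven tests. The sketch below shows only the shape of such a test against an invented `renderRules` helper; the real kpng and kube-proxy functions differ, but the pattern of enumerating named cases over a pure rule-rendering function carries over.

```go
package main

import (
	"fmt"
	"reflect"
)

// renderRules is a stand-in for the kind of pure function a backend
// unit test would target: given a service IP and ports, produce the
// rules to program. It is invented for illustration only.
func renderRules(svcIP string, ports []int) []string {
	var rules []string
	for _, p := range ports {
		rules = append(rules, fmt.Sprintf("dnat %s:%d", svcIP, p))
	}
	return rules
}

func main() {
	// Table-driven style, as is idiomatic in the Kubernetes codebase.
	tests := []struct {
		name  string
		ip    string
		ports []int
		want  []string
	}{
		{"single port", "10.0.0.1", []int{80}, []string{"dnat 10.0.0.1:80"}},
		{"multi port", "10.0.0.2", []int{80, 443},
			[]string{"dnat 10.0.0.2:80", "dnat 10.0.0.2:443"}},
		{"no ports", "10.0.0.3", nil, nil},
	}
	for _, tc := range tests {
		got := renderRules(tc.ip, tc.ports)
		if !reflect.DeepEqual(got, tc.want) {
			panic(tc.name + ": unexpected rules")
		}
	}
	fmt.Println("all cases pass")
}
```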
B
A couple of questions I have. So kpng: is it targeted to be integrated into upstream sometime? In the sense, what is the plan to replace kube-proxy in the main Kubernetes repo? The second question is: other than simplifying the code, where there is a lot of duplication, what else does it improve?
A
Yeah, so regarding the first question: this is the KEP that is in progress, which talks about upstream Kubernetes basically accepting kpng, but we are not there yet. We still have to work on kpng and make sure that it is capable of replacing kube-proxy going forward; we are still working on making kpng capable of that. To address your second question: yes, it does address efficiency improvements. At a very high level overview...
A
kube-proxy has sort of a DaemonSet kind of architecture as of now, wherein on every node there is a kube-proxy instance which constantly polls the API server and things like that, and as and when you scale up your cluster, like 7,000 or 8,000 nodes, this can lead to performance constraints on the API server, right?
A
So yeah, kpng also tries to address things like that by reworking the architecture, so that every node doesn't have to continuously poll the API server. A part of kpng, which we call the kpng server, can do that and then send whatever changes there are in the services and endpoints to the particular backend implementation. Vivek, if you want to add to this, please go ahead.
D
Yeah, and even for the backends, what we are trying to do is make them better wherever we see there are issues with respect to the upstream Kubernetes backends. Other than that, as Rajas was saying, the kpng server itself has been made keeping in mind performance and scalability. And one more thing: beyond the performance, it's actually easy to put in your own kind of back...
D
...end too. If you want a custom backend, that would have been a little cumbersome if you had done it with kube-proxy. But here, since you have the kpng server, which is like an umbrella, you can write a small portion of code which plugs into the kpng server and does the work you need. You can even create multiple backends for the same kpng server, so you can have multiple backends, each of which does its own work based on the requirement.
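Vivek's point about small custom backends sharing one server can be illustrated with a minimal sketch. All names here are invented and kpng's real plumbing differs; the idea is only that each backend is a small piece of code plugged into the same event stream.

```go
package main

import "fmt"

// Event is an invented service-change notification.
type Event struct{ Name, IP string }

// BackendFunc is a hypothetical plug-in point: a custom backend is
// just a function consuming events from the server.
type BackendFunc func(Event)

// Hub stands in for the kpng server "umbrella" that several
// independent backends can plug into.
type Hub struct{ sinks []BackendFunc }

func (h *Hub) Plug(f BackendFunc) { h.sinks = append(h.sinks, f) }

func (h *Hub) Send(e Event) {
	for _, f := range h.sinks {
		f(e)
	}
}

func main() {
	h := &Hub{}
	// Two independent "backends" sharing one server: one pretends to
	// program rules, one just counts events for metrics.
	var rules, metrics int
	h.Plug(func(e Event) { rules++ })
	h.Plug(func(e Event) { metrics++ })
	h.Send(Event{Name: "svc-a", IP: "10.0.0.1"})
	fmt.Println(rules, metrics) // prints 1 1: both backends saw the event
}
```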
B
Okay, sounds good. This is really interesting. I would be interested; maybe I'll go through the details and probably come back with some questions if I have any.
A
Sure, feel free to reach out to us on the sig-network kpng channel on the Kubernetes Slack, and also hang out with us in these meetings. So this is sort of the architectural diagram that we have for kpng, and this is what Vivek was talking about: adding backends and things like this over here. So the backend only does the...
B
Is it a plug-in architecture for Cilium or other things, or does it only work with iptables? Because one of the things is that iptables usually seems to be slower in Kubernetes; is that being addressed? Sorry if it was already covered in the docs, but I just wanted to know.
A
Yeah, I don't think we are there yet with Cilium. We also have other backends like ipvs and nft and things like that, which focus on the improvements which are not there in iptables. And Vivek, feel free to pitch in; Vivek is sort of our go-to person for all of the backends.
D
For the time being, yeah. I think, regarding that: it's not a CNI per se, right, you still need to have a CNI. It is just about what kind of backend you choose. For example, you would still need Calico to be there for the networking; it's just about how you are actually doing the service networking, which is what kube-proxy in Kubernetes actually does.
A
We do have Cilium support here, right? I mean, we can either use Cilium or Calico and things like that, right? Yes, yes.
A
Cool, yeah, that's all I had from my side for the agenda at least. If you folks have any other questions, or want to have any other discussion topics, then please go ahead.