From YouTube: Kubernetes SIG Network Bi-Weekly meeting for 20201217
B
We have a couple of quick items of regular business. There are two maintainer candidates here; they're both Red Hatters for the SR-IOV network operator: Sebastian Scheinkman and Federico Paolinelli.
B
Let's see, I'm just taking a look at the list here, and I believe that we should have quorum. Does anyone have any objections?
B
All right, in that case they are accepted. Otherwise, there was a pull request. Was that from you, Adrian? I believe it was for a couple of NVIDIA members to be added, yeah. Thank you for getting that. I'm sorry, I forgot you guys on my last round, so I just merged it this morning. So thank you.
A
Which reminds me, everybody: please add your attendance to the attendees list at the top of the agenda. I will paste the agenda doc into the Zoom chat if you don't happen to have that handy. That is how we track attendance for purposes of voting and membership and such. Thanks.
B
Awesome, and I believe that concludes our regular business. I've got one item on the agenda. I'm a little disappointed that we don't have Martin Kennelly from Intel today; I know this time of year is sometimes difficult to get everyone together. But during the Multus maintainers meeting last week, Tomo, Martin and I had a pretty good conversation about service abstraction.
B
The thing that really brought it up is that we were talking about dynamic attachments for Multus and that architecture, and, you know, Martin was excited about the possibilities that might open up, even if it may be another process, another controller, that actually performs this.
B
It was good to get that conversation going a bit, and I was kind of hoping that we could spend some more time as a bigger group to, you know, cover that and to help align Martin with some of the work that Tomo and Peter have done along these lines. So anyway, I added some raw notes from our maintainers meeting just to help. Without Martin's input I'm at a little bit of a loss as to where to start here, but I might ask Tomo if you want to kick this off a little bit.
B
Another thing is, I guess, kind of the state of affairs here. As far as I understand it, Tomo's previous prototype was using the endpoint controller, but we've moved on to EndpointSlices, and so that's kind of the next point in this proof of concept: getting EndpointSlices.
D
Yeah, so previously I did the PoC. At that time I modified the upstream Kubernetes code, changing the endpoint controller to add the Multus interface IP address to the Endpoints. And now I'm also prototyping the next version, I mean one that is separate from upstream Kubernetes, implementing the endpoint controller on its own; and this is actually the endpoint controller, and now the upstream design has moved to use EndpointSlice instead of Endpoints.
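To make the EndpointSlice direction concrete, here is a minimal sketch (not the actual prototype discussed here) of what an out-of-tree controller might publish: an EndpointSlice whose addresses come from a pod's secondary, Multus-attached interfaces rather than its primary pod IP. The multusIPsForPod helper is hypothetical.

```go
// Sketch of building an EndpointSlice from secondary-interface IPs.
// Assumption: multusIPsForPod would extract addresses from the
// k8s.v1.cni.cncf.io/network-status annotation; it is stubbed here.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func buildSlice(svcName, namespace string, pods []corev1.Pod, port int32) *discoveryv1beta1.EndpointSlice {
	slice := &discoveryv1beta1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: svcName + "-",
			Namespace:    namespace,
			Labels: map[string]string{
				// Ties the slice back to the "service" it abstracts.
				discoveryv1beta1.LabelServiceName: svcName,
			},
		},
		AddressType: discoveryv1beta1.AddressTypeIPv4,
		Ports:       []discoveryv1beta1.EndpointPort{{Port: &port}},
	}
	for i := range pods {
		ips := multusIPsForPod(&pods[i]) // hypothetical helper
		if len(ips) == 0 {
			continue
		}
		slice.Endpoints = append(slice.Endpoints, discoveryv1beta1.Endpoint{
			Addresses: ips,
			TargetRef: &corev1.ObjectReference{Kind: "Pod", Name: pods[i].Name, Namespace: namespace},
		})
	}
	return slice
}

// multusIPsForPod would parse the pod's network-status annotation; stubbed here.
func multusIPsForPod(pod *corev1.Pod) []string { return nil }
```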
D
So at that time we also need some component which manipulates iptables to implement the service abstraction, so this is the piece that is still missing, and I have the planning and the architecture stuff. But today I don't have the time yet, so that is my to-do. Before that, right now I'm working on the Multus network policy stuff, so the service abstraction priority comes after the network policy. That's what we talked about in the Multus maintainers meeting.
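As a rough illustration of that missing data-plane piece, a per-node agent could program a kube-proxy-style DNAT rule mapping a virtual IP to a secondary-interface endpoint IP. This is only a sketch using the coreos/go-iptables library; the chain name and the single-endpoint rule are simplifying assumptions (real load balancing would need per-endpoint probability rules).

```go
// Sketch: DNAT a "service" VIP on the secondary network to one endpoint IP.
package sketch

import (
	"fmt"

	"github.com/coreos/go-iptables/iptables"
)

func programVIP(vip, endpointIP string, port int) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}
	const chain = "MULTUS-SERVICES" // hypothetical chain name
	if err := ipt.NewChain("nat", chain); err != nil {
		// Chain may already exist; a real agent would inspect this error
		// rather than ignore it.
	}
	// Jump from PREROUTING into our chain.
	if err := ipt.AppendUnique("nat", "PREROUTING", "-j", chain); err != nil {
		return err
	}
	// DNAT traffic for the VIP to the chosen endpoint.
	return ipt.AppendUnique("nat", chain,
		"-d", vip, "-p", "tcp", "--dport", fmt.Sprint(port),
		"-j", "DNAT", "--to-destination", fmt.Sprintf("%s:%d", endpointIP, port))
}
```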
A
I know we discussed that in the past; I kind of lost track of whether that was still valid or workable or not. The idea there is that with dual stack we have the capability for multiple IPs.
A
There are a couple of issues with it right now that we're trying to work out in SIG Network, but maybe that's still a possibility.
D
Dan, one question: do you mean that multiple IPs means one IP address from the primary interface, and then the other one is the Multus one?
A
Yeah, the general thought there was that right now the API server restricts pod IPs to two: one that has to be v4 and the other one has to be v6.
A
The idea there for the future was that we could try to get labels on each IP address in pod status podIPs, and that would be conveyed from CNI into the runtime. And then the runtime would just send those labels back up to the kubelet, which then posts to the API server. And so one option there was that, if each of the IPs had labels, you could, for example, set the label to be like the network attachment definition name or something like that, and then you could have services that had a pod IP selector, and that service would say...
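A sketch of what that labeled-IP idea might look like if it existed; none of these types are in Kubernetes today (corev1.PodIP has only an IP field, and Services select pods, not individual pod IPs), and the label key shown is only an assumption.

```go
// Hypothetical shapes only, to illustrate the labeled pod-IP idea.
package sketch

// LabeledPodIP is what a pod status entry could carry if CNI-provided
// labels were plumbed through the runtime and kubelet to the API server.
type LabeledPodIP struct {
	IP     string
	Labels map[string]string // e.g. {"k8s.v1.cni.cncf.io/network": "macvlan-net"} (assumed key)
}

// IPSelector sketches a per-service selector over pod IP labels, so a
// service could pick only addresses from a given secondary network.
type IPSelector struct {
	MatchLabels map[string]string
}

// Matches reports whether an IP's labels satisfy the selector.
func (s IPSelector) Matches(ip LabeledPodIP) bool {
	for k, v := range s.MatchLabels {
		if ip.Labels[k] != v {
			return false
		}
	}
	return true
}
```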
F
Yes, just to be sure: that would mean that CNI would have to contain something like what we call a pod agent, something that actively listens for what happens, because a new prefix can be added to a network at any point. So the network assignment, or the address assignment, could change at any time during the pod's lifetime, which then, I mean, we decided in Columbus that the address assignment has to be taken out of the immutable part, that the addresses are mutable.
A
This would be one way to maybe do it in a kube-native sort of way that might even allow us to do it in the normal Kubernetes endpoints controller, as opposed to having to write a bunch of other code to do it. But you're very right that we'd need to add some kind of, I mean at least for v6, dynamism to the network plugin or...
F
I've been doing some slides that I showed with class r when I have discussions with them. They're not ready, but I should be able to... they're not completely detailed, and they talk about CNIs, more like how do we deal with the orchestration, what are the needs for VRFs and how do we handle that. I should be able to actually present them in this forum sometime towards the end of January, if you're interested. Yes.
F
We can do it just sort of as part of the Red Hat collaboration we have.
A
Yeah, I mean, the other parallel streams we have going on are investigations into how to do dynamic attachments with an implementation like Multus, or cactus, or something else.
A
If we have that, another part of the puzzle would be, in the short term, CNI CHECK, and whether plugins implement CNI CHECK... well, actually, okay, that's not gonna work.
F
I also looked at sort of different ways, if you start using VRFs in a container or inside the pod, what the alternatives are and what the different ways of doing this are. And one way, actually the one I don't really like, requires a change to the kubelet, but there are other ways to sort of have some sort of, call it a trampoline, that starts different containers with different default VRFs, and some scheme to have eth0 being able to use a normal loopback.
F
And the container that runs there can then use IPv6 and talk to other containers that have different VRFs, and then you'd be able to describe, when you start a container inside a pod, which VRF it should have as its default, and so on. Because otherwise I don't see how anyone can write a Go-based container and do multi-VRF if you try to use a library, because, I mean, you cannot specify anywhere which socket should be used for an operation.
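For context on that last point: on Linux the usual workaround is to bind each socket to the VRF's network device with SO_BINDTODEVICE, which in Go means writing a custom Dialer.Control hook rather than anything a typical library exposes. A minimal sketch, with a made-up VRF device name (note that SO_BINDTODEVICE generally requires CAP_NET_RAW):

```go
// Sketch: dialing TCP through a specific VRF by binding the socket to the
// VRF's network device. Most Go libraries give no way to request this, which
// is the difficulty described above.
package sketch

import (
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// dialInVRF opens a TCP connection whose socket is bound to the given VRF
// device (e.g. "vrf-blue", a hypothetical name), so routing uses that VRF's table.
func dialInVRF(vrfDev, addr string) (net.Conn, error) {
	d := net.Dialer{
		Control: func(network, address string, c syscall.RawConn) error {
			var bindErr error
			if err := c.Control(func(fd uintptr) {
				bindErr = unix.BindToDevice(int(fd), vrfDev)
			}); err != nil {
				return err
			}
			return bindErr
		},
	}
	return d.Dial("tcp", addr)
}
```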
F
That's what networking was in Kubernetes when you had to have one interface, but when you break out and start looking at multi-homing, VRFs and so on, it gets much wider, and I think we need to come up with some templates for how people can work in that environment, and maybe some libraries that make it simpler for people, because when we're doing it ourselves, it's tricky.
A
I mean, I think we do have a ways to go to get things to be more dynamic, and, like I said, there's work going on in a couple of areas to make that happen. I think it's encouraging, because we kind of had a lull in that work for a while, but I think we're at the point where that stuff's getting pushed forward.
F
The thing is that with the problems multi-homing brings, it looks easy, but it's not, and, just to step in there, if we're really going to push for it, it's better to be honest and say: try to avoid it if you can, and have the tools ready. So we don't get this "oh, these things are available," and then people start pushing out containers or pods that just work in one system.
F
It doesn't work in another, and it's all based on how routing is set up and how the platform underneath is set up, and I'm very scared to give ammunition to the people who are saying that it should only be one network.
A
That'd be great; looking forward to seeing that.
A
One of the other parallel efforts is to, I mean, again, move the CNI to something like gRPC and allow, you know, updates back of the IP information, which would better handle something like IPv6 router advertisements.
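As a purely hypothetical illustration of what "updates back of the IP information" could mean, a long-lived, gRPC-style CNI contract might add a watch stream alongside the one-shot ADD and DEL; nothing like this exists in the CNI spec today, and all names below are made up.

```go
// Hypothetical shape of a long-lived, gRPC-style CNI contract.
package sketch

import "context"

// IPUpdate is pushed by the plugin whenever a pod interface gains or loses
// an address (for example via an IPv6 router advertisement).
type IPUpdate struct {
	Interface string
	Added     []string
	Removed   []string
}

// DynamicCNI extends the one-shot ADD/DEL model with a watch channel.
type DynamicCNI interface {
	Add(ctx context.Context, podSandboxID string) ([]string, error)
	Del(ctx context.Context, podSandboxID string) error
	// Watch streams address changes for the sandbox until ctx is cancelled.
	Watch(ctx context.Context, podSandboxID string) (<-chan IPUpdate, error)
}
```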
F
They should change the name of it at some point, right, to, I don't know, something more about the overall networking. And I like what you've done; I mean, I look at OVN, and not having to work with the kubelet and with kube-proxy anymore is a relief. At the same time, if everyone is going to do their own control component for Kubernetes, that's not good either; then we're going to get such wide divergence depending on which network you're running inside.
F
I sort of had one purpose there: it could be a shared control component, but with an easy way to fill in your own data plane components, because those can be different.
F
I do think that there's a possibility to share the overall top-level component, and of course that one needs to be generic; it shouldn't care whether you're using an act or not when you deliver.
B
That's all that I had on here; I really was just looking to kick off the conversation a little bit and keep this rolling. Thanks, Dan and Per. I've got a bunch of notes from you guys; feel free to take a look and correct them. They're terse in parts, but then...
A
I'm back on the 4th. Okay, okay, cool. But yeah, this group has a cancelled meeting on the 31st, since we assume a lot of people are going to be out, and then our next meeting would be the 14th after that. So I think most people will probably be back by then.
A
All right, do we have any more discussion on service abstraction or anything like that?
E
Yeah, Doug just kind of posted that he thinks he lost audio, so he kind of interrupted Per in the middle of a sentence. I don't know if he finished his sentence or was kind of...
A
All right, in that case, unless Doug somehow comes back, do we have anything else to discuss today?