From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220721
A: Let's start off with some triage, as per usual. Dan, do you still have them up, since you were looking at them? Yep.
B: I think they need some external configuration for AWS, if that's what they're using; otherwise it's other cloud support, or cloud provider support. We don't usually do support issues, so do we want to keep this one open for a little bit and see if it gets resolved by next meeting, or close it out?
B: Okay, well, I'll assign myself, leave it open for a little bit, and then close it.
B: So this one is a little bit older. We probably covered it last meeting, and it's kind of in the same boat: it's more of a support thing. I don't think it has much to do with kube at this point, so leave that one open too.
B: All right, and sorry, this one: how to set multiple node port ranges. I don't believe that we support this yet, because --service-node-port-range only takes a single range as far as I can tell, so this one would need some kind of PR or enhancement.
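For reference, the single range mentioned here corresponds to a kube-apiserver flag; a sketch of how it is configured today (the value shown is the default range; multiple or comma-separated ranges are, as noted, not accepted):

```
kube-apiserver --service-node-port-range=30000-32767
```

Supporting multiple ranges would mean changing how this flag is parsed and validated, hence the need for a PR or enhancement.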
D: I know what happened, I know what happened, and I think the person has a PR. So when you have a node port, the test allocates one randomly, but the test has one hard-coded node port, so eventually the random port hits the one they assign statically in the test, and it flakes. But, I mean, the logic to change, that logic is very common.
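The collision described here can be sketched abstractly: one test draws node ports at random from the service node port range while another pins one statically, so a random draw eventually hits the pinned port. A minimal sketch in Python, with made-up port numbers (not the actual e2e tests'):

```python
import random

# Hypothetical values for illustration; the real tests use different ports.
PINNED_PORT = 30100                 # port one test hard-codes
RANGE_LO, RANGE_HI = 30000, 32767   # default service node port range

def allocate_random_port(rng):
    """Mimic a test that grabs an arbitrary node port from the range."""
    return rng.randint(RANGE_LO, RANGE_HI)

def draws_until_collision(seed=0):
    """Count random allocations until one collides with the pinned port."""
    rng = random.Random(seed)
    draws = 0
    while True:
        draws += 1
        if allocate_random_port(rng) == PINNED_PORT:
            return draws  # this draw is the one that makes the test flake
```

With roughly 2,768 ports in the default range, a collision is expected only after a few thousand draws, which is why the flake shows up rarely; the fix alluded to would make the random allocation avoid the statically assigned port instead of relying on luck.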
D: Topology, how to.
B: Well, and that's fine. Like, you know, we've triaged this issue, you know, there's been a lot of discussion, and I guess once we figure out whether we'll fix it or not, then we can make that determination and close it.
A: Cool, thanks, Dan. Looks like the next item is just a reminder that the 1.25 code freeze is in two weeks, on August 2nd.
A: So if you're working on code, keep that date in mind. I don't know if whoever put that on wanted to say anything else about it.
A: But I will assume not, and then after that was Antonio.
D: Yeah, we need to thank Ricardo for his containers. I've tested the PR; I had to fight with the promotion machinery, but now I have a PR and it's working. If Rob and the other person, I don't know, that tested the image and checked the vulnerabilities, or anybody else, want to test the image...
D: Yes, comment on the PR. If not, I think that we should merge before code freeze so this can start to soak, because once we cut, I think that in OpenShift they are going to get the images. I don't know if we have to do more steps, but the point is: if we want to move in this release with this image, we should start testing as soon as possible.
G: Hello, can you hear me? Great. I wanted to give a little info on something that's going on in SIG Multicluster, which is the SIG I'm most a member of.
G: So I'm going to share my screen, because I did bring a few slides, but I think possibly many people on this call know about this project already, either because they've come to SIG-MC, or because I have come here and talked about it a little bit in connection to the MCS API, which is the SIG-MC project, and I also went to Cluster API a while ago, which has some overlap here. But I'm going to recite a little bit here for SIG Network and kind of put it on the record.
G: And hopefully everyone can see my screen. Basically, I just want to share some information about this About API project, and particularly that the CRD, which is the way it's implemented, is currently in alpha, and then ask about, or just socialize a little bit, any potential use cases that SIG Network or its related projects might want to collaborate on, if that is relevant, and just make myself available as a point of contact for that.
G: So just the brief background: there's this KEP-2149 for SIG Multicluster that describes this About API, which is also available in alpha at kubernetes-sigs/about-api, and the point of this is to be a cluster-scoped cluster property CRD that just has, like, key-value pairs of arbitrary properties about a cluster. The idea is to make information about a cluster discoverable from, like, one central place, and make the cluster sort of self-aware.
G: So I think, historically, things like this may have been achieved through, like, annotations on, like, node objects, or, like, some sort of custom cluster object, but specific to a certain API, like the cluster lifecycle APIs. But this is trying to go for something that's a little bit more abstract and potentially more general-purpose, even though we did come up with it at SIG-MC for a specific purpose. So we described it for the purposes of the MCS API.
G: We have these two CRs with well-known names that have to follow certain properties, so that they're useful to us in that context, and you can definitely click through the KEP if you want to know all the details about that. But there's one for a cluster's name, and there's one for a cluster's clusterset membership, which is the unit of clusters working together in a multi-cluster world. Small little asterisk: it so happens that we want to rename the one previously known as id.k8s.io.
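As a sketch of what those two well-known CRs look like under KEP-2149 (group, version, and names taken from the alpha API at the time and subject to change, especially given the rename just mentioned; the values here are invented):

```yaml
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: id.k8s.io             # well-known property: the cluster's identifier
spec:
  value: cluster-a
---
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: clusterset.k8s.io     # well-known property: clusterset membership
spec:
  value: my-clusterset
```

Being cluster-scoped with well-known names, these can be read from one central place, e.g. with `kubectl get clusterproperties`.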
G: So if you like to give your feelings about names of things, there's a survey link at the bottom, so you can, you know, make your opinions known. So that's some background about this project and why SIG-MC, specifically the MCS API, needed it, but the CRD itself is very general-purpose.
G: It's names and values, and the value can take any form. Again, the overall intent of the CRD is to store any arbitrary properties about a cluster in this cluster-local CRD, and there's room in the KEP for other implementations, or just anybody else, to store any other arbitrary properties they want, as long as they follow some restrictions, which are basically: don't conflict with the ones already reserved, use some sort of suffix, and don't use the reserved k8s.io or kubernetes.io suffixes unless it goes through, like, API review and everything.

G: So some ideas, just generally, that are part of the KEP are, like: if some cool implementation has some sort of, like, special fingerprint thing, they could store some, like, arbitrary JSON, or if you wanted to store, like, the network name of the cluster for your MCS implementation, maybe you could suffix that. But then this kind of leads into, like, maybe there's some stuff that, like, SIGs or SIG subprojects might want to store here, and maybe even make them well-known names that, like, follow this kubernetes suffix. Can you...
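Under those restrictions, a third party's property carries its own domain suffix rather than a reserved one; a hypothetical example (the name and value are invented here) storing the network-name idea just mentioned:

```yaml
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  # Hypothetical vendor-suffixed name; the reserved k8s.io / kubernetes.io
  # suffixes may not be used without API review.
  name: network-name.mcs.example.com
spec:
  value: prod-vpc-1
```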
G: Yeah, I mean, I think that is probably still to be determined, what the best practice is. I think there's a lot of heritage of making singleton CRDs, and also, as was mentioned before, of putting annotations on, like, node objects and just finding a way to go find them.
G: So I think this roadshow is kind of to gauge some interest. We definitely needed this for MCS somewhere, and we chose to go this way, and we want it to be generally useful, so that other properties are all in the same place and we don't have to keep running around to, like, different implementations. But I'm not sure I have the clout yet to say, like, this is the case where you use this, and, you know, this is the case where you use that.
G: Okay, so that's kind of the general situation. I do think maybe I can put together in the KEP some more thoughts about why putting your arbitrary properties here, instead of in an annotation or in a singleton, can be covered. But here are just some brief thoughts on what SIG Network might have some jurisdiction over and want to store somewhere, though you all are definitely the experts, so, like, ignore me if I'm wrong; these are just some ideas brought up within SIG Multicluster.
G: This is kind of based off of someone who had experience where someone wanted to pull all the inventory of a cluster's node CIDRs and pod CIDRs, so they could analyze it holistically. Maybe something about cluster project IDs or regions, though those could be too implementation-specific, unless there is some way that a general standard, that this information should be here, might be useful to any SIG.
G: And then cluster VPCs are kind of in the same category, and this kind of was also inspired because there was some talk about trying to represent a network name, or at least a default network name, as a cluster property in the About API, for use in multi-network MCS scenarios, and there's some PR history about that too. But we got a little stymied thinking about the multi-network case.
G: So these are just some ideas from us. Happy to take a note from anybody here if there's something this is making you think of, some sort of, you know, property that any of your code has needed, that you, you know, smash into an annotation somewhere but might put here, and I'll happily take a note to follow up. Or, again, just making myself available as a point of contact: you can DM me or find me in the SIG Multicluster Slack channel.
H: Just a quick thought, I mean, just generally for the network team: would things like the name of the CNI plugin being used in that cluster be a useful property to make available?
A: I mean, there's not a standard way today to see what your cluster is running through the API, so it's not a common thing, but I certainly see people who have no idea what they're using and, like, come to me for help with something that's not Calico, for example. And I know that on the Calico side, we have long since used our own CRD for purposes similar to this, so if this had existed, we might have used this instead.