From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20210108
A: We are recording. This is the first Kubernetes SIG Network of 2021, January 7th, and I think Tim's going to kick us off.
B: All right, can you guys see me? You got my screen? — Yes, yep. — I got it. All right, so I loaded up today; there were 31 issues. They get old pretty quickly, which is good news. I opened up the first 20, went through a few, and was able to close them out or address them myself. The rest, let's go through. Many of these are test flakes, and so we're just gonna need people to sign up, to see if we can understand what's happening and how we can fix them.
B: So I'm asking for volunteers. Number one: kubernetes e2e ubuntu gce network policies, which flaked — so it looks like network policy.
B: I think — all right, I'm going to mark it as accepted. Awesome. This one I looked at; it looks like a feature request: arbitrary FQDN as hostname inside a pod. This was just a couple days old.
B: That was different, in that they wanted — that's true, I thought that too, and I was ready to close this. They wanted: give me the full Kubernetes-style hostname as my hostname — the FQDN is my hostname — so foo.service.namespace, or service.namespace.cluster.local, as my — if I say hostname, show me that. So that's the KEP that got implemented.
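(Reference: the KEP referred to here is presumably the setHostnameAsFQDN pod field; a minimal Go sketch using the upstream core/v1 types, with illustrative values:)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	setFQDN := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "bar"},
		Spec: corev1.PodSpec{
			Hostname:  "foo",
			Subdomain: "svc-name", // headless service name; illustrative
			// With this set, `hostname` inside the pod returns the full
			// foo.svc-name.bar.svc.cluster.local form instead of the short name.
			SetHostnameAsFQDN: &setFQDN,
			Containers:        []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	fmt.Println(pod.Name)
}
```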
B: My feeling on it is: we've tried to be really rigid, to give people the best, most consistent experience, and I've heard this come up enough times that I'm almost ready to just give in on it and say, you know, if you break your own pods, it's your own pods, right? Like, I'm not sure that there's actually any negative implication of letting people set this for themselves.
B: It doesn't mean we're going to put random things in DNS. It just means that when they run hostname, they'll get whatever they wanted to see there, right? So I would like, if anybody has thoughts about that, to take a look at this one. So actually I'll probably just assign it to myself. Sorry.
E: Tim, I was reading this one, and I was thinking, like: yes, workarounds for poorly designed older software feel kludgy, but at the same time, if it removes an obstacle for people, and it still follows the principle of least surprise — where it's not going to arbitrarily change what hostname does unless someone tries to — then I'm not sure I see a lot of harm here.
B: Yep. So my feeling is, like, as the ultimate escape hatch, this probably seems okay. Cal, I assigned you; also, anybody else who wants to, feel free to jump on this one. I'm gonna leave it open for myself, to have a read through and give some comments on later.
G: Yeah, this is — is it for, in case there's some other feature that goes into this — kind of anticipating adding those default labels? I think Jay was working on this a little bit more. Yeah, that's —
B: Cool. Maybe we can follow up on the state of that KEP after we're done with triage. Okay.
B: Okay, next: IPVS kube-proxy cannot delete old endpoints after rebooting a master. Lars assigned himself.
B: I haven't read this one yet, so —
G: You can, yeah. This was another one where we found a really interesting test case, and I think Nils added that in, and then we realized that there's a little bit of work that we could do to just kind of make these tests a lot more clear in how we organize them.
B: A member, since you're — okay, cool, yep. Yeah, I was gonna say that in general — thanks for bringing that up — for, you know, the regulars here who are filing things that we know are real: you can go ahead and triage-accept them yourself. Cool, sounds good.
B: I certainly don't want to discourage people from filing issues, but we don't need to triage issues that we know are bugs or feature requests or whatever. Add the ability to target an ingress class controller in a specific namespace: so, I read this one and commented on it earlier.
I: You can assign me as well. This is something — I talked with Alejandro a little bit about this as well, or something that seems similar, for NGINX ingress. I know some users deploy ingress controllers in different namespaces, and with, you know, "this ingress should be provisioned by this" — that used to be the concept of ingress class, but now, with IngressClass —
I: It's a little less clear, like, what does controller name mean? Like, if you have multiple instances of the NGINX ingress controller, how do you interpret that? There's some ambiguity in the API spec around this. So if that's where they're going with this, I can follow up a bit more.
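(Reference: the v1 IngressClass ties a class to a controller only through an opaque, cluster-scoped controller string, which is the ambiguity being discussed; a minimal Go sketch with illustrative values:)

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	class := networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-team-a"},
		Spec: networkingv1.IngressClassSpec{
			// Controller is a cluster-wide, opaque identifier; nothing in the
			// v1 API scopes it to a namespace, which is why it is unclear how
			// several NGINX instances in different namespaces should interpret it.
			Controller: "k8s.io/ingress-nginx",
		},
	}
	fmt.Println(class.Spec.Controller)
}
```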
B: Okay, all right. Initially I thought maybe it was asking for, like, the ability to deploy an ingress controller in my own namespace, and create my own classes, and then use those classes in my own namespace — like, totally self-determined — which is vaguely interesting. I don't really understand this sort of half step, so — I commented on there, but feel free to jump in also. Next: kube-proxy TCP timeout for CLOSE_WAIT — the default value is 60 minutes, not 60 seconds — and there's history here.
B: There's an old bug, in the 30,000 range, that we encountered, and we bumped the time up from 60 seconds to 60 minutes. When you do the math on any high-QPS connection like this, it's obviously not gonna work — like, you're gonna have huge amounts of conntrack records. I think maybe we should go back and revisit that old 30,000-class bug. It was around GCE's metadata server and it not closing — like, we did —
B: It was half closed, and for some reason it wasn't closing the other end of the connection. It's a fun read — I remember the bug; it was a really fun one to understand — but I kind of agree with the bug reporter here: 60 minutes feels egregious.
B: So I tagged Bowei on it. I don't know if he's here.
J: I think he's out on vacation, so — oh, Ninon spoke up; guess he gets assigned.
B: Just assigned it to me, yeah. My first thought was: well, you could just bump up the number of conntrack records. But honestly, I don't understand exactly, in this case, why they're not — like, why are the NGINX tests that they're running leaving connections in a half-open state — or half-closed state, I guess? But still, at 60 seconds that seems ridiculous.
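(Reference: the timeout under discussion is exposed through kube-proxy's conntrack configuration, and maps to the node's nf_conntrack_tcp_timeout_close_wait sysctl. A minimal Go sketch using the upstream kube-proxy config types; the one-hour value is the default being debated:)

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeproxyconfig "k8s.io/kube-proxy/config/v1alpha1"
)

func main() {
	// Current default, raised from 60s to 60m after the GCE metadata-server bug.
	closeWait := metav1.Duration{Duration: 1 * time.Hour}
	cfg := kubeproxyconfig.KubeProxyConfiguration{
		Conntrack: kubeproxyconfig.KubeProxyConntrackConfiguration{
			// Governs net.netfilter.nf_conntrack_tcp_timeout_close_wait on the node.
			TCPCloseWaitTimeout: &closeWait,
		},
	}
	fmt.Println(cfg.Conntrack.TCPCloseWaitTimeout.Duration)
}
```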
A: Assign this one to me, Casey.

B: Oh, oh yeah, okay. So now we're into the, like, November bugs. How are we on time? We're at time, so let's just stop here. There are a bunch of bugs — about 12 bugs — that are older than December, that we can probably go through and either age out some of them or ping these bugs.
L: That's this one — 13 hours ago. Yeah — 9, 7, 7, 9, 7. For some reason I can't see yours. Who's gonna get 100k?
B: This is the thing that we were arguing about in November–December, right, Cal? The auto-clearing of node ports. Yes — so, almost certainly, this is fixed in master and not in whatever they're on, 1.18. So I'll take this one — I'll take this one; I will respond and link them to the appropriate PR. Cool. Anybody else have any triage-ish stuff they want to talk about?
A: Cool, thanks Tim. The next item was about validation on service load balancer IPs.
A: That goes to Rob and — sorry, I don't know how to pronounce your name.
N: So I'll just quickly say hi, because I think this is the first time I've spoken at this meeting. My name is Swaitha; I also work at Google, with Rob, on the same team. So, quickly, to kind of describe what those fields are on the service spec: there are a couple of fields — loadBalancerIP and loadBalancerSourceRanges — that allow users to accidentally add whitespace into the field, which eventually fails.
N: There might be a couple of places, for loadBalancerSourceRanges, where it's been trimmed somewhere along the process, but essentially there is no validation that trims that whitespace or reports an error. And since this kind of gets into an issue of backwards compatibility, we're wondering if there is support for tightening the validation — mostly because, if this is wrong, when a user specifies it they're going to run into an error eventually.
F: So these values are eventually used by whoever implements the load balancer, right? So the cloud, or the bare-metal load balancer that's out there — and whatever they do, they will have to trim the whitespace, right?
F: They cannot use that — like, a CIDR with whitespace somewhere before or after it. So, in my own view, if we started either auto-trimming — so, as we get the data, we trim and validate — or we just validate, that shouldn't be a problem downstream. But there might be a problem facing users who were used to inserting a CIDR with a space at the end. So if we are to do something, I think we should trim and validate — like, trim the values that come in, and just validate them.
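(Reference: a minimal Go sketch of the trim-and-validate option proposed here; normalizeSourceRanges is a hypothetical helper, not the actual validation code in k8s.io/kubernetes:)

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// normalizeSourceRanges trims surrounding whitespace from each entry and then
// validates it as a CIDR — the "trim and validate" behavior discussed above.
func normalizeSourceRanges(ranges []string) ([]string, error) {
	out := make([]string, 0, len(ranges))
	for _, r := range ranges {
		trimmed := strings.TrimSpace(r)
		if _, _, err := net.ParseCIDR(trimmed); err != nil {
			return nil, fmt.Errorf("invalid loadBalancerSourceRanges entry %q: %w", r, err)
		}
		out = append(out, trimmed)
	}
	return out, nil
}

func main() {
	got, err := normalizeSourceRanges([]string{" 10.0.0.0/8 ", "192.168.0.0/16"})
	fmt.Println(got, err) // [10.0.0.0/8 192.168.0.0/16] <nil>
}
```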
I: Oh yeah, that's — that's the problem we have right now. So what we've done right now is some kind of intermittent patches: kube-proxy consumes some of these fields, so we've updated that to trim whitespace before it processes these values. But it feels like that's not a complete solution, because we don't want an API where everyone has to trim a value before they use it.
B: I could make an argument that, you know, for these specific fields, trimming leading and trailing whitespace is always safe. There is no syntax for an IP or a CIDR that includes whitespace, right? So it should always be safe to do this. That doesn't mean that it's precedented, and so I don't know how somebody like Liggitt would feel about opening that can of worms — because, honestly, what would stop us from doing this for all API fields?
B: Why wouldn't you want to trim and remove whitespace from every single API field, or every single — I don't know. And then there's the where — where to do it, right? Ideally, we would do it in one place, sort of generically. Practically, there isn't such a place, and so we would have to do it either in defaulting, or in the REST stack somewhere.
B: Yeah, I mean, strategy has the advantage of ignoring versions, right. Bridget says it may violate the principle of least surprise — possibly. That said, if we were to start enforcing it at the API level and say, hey, you fed me an IP address that has whitespace at the end: we will almost certainly break somebody who was doing that, and it was just working before, right? Because we worked around it partially in kube-proxy, or because we — whatever. And if we start tightening that, it will be a breaking change.
I: Yeah, I think most consumers of this API are already trimming whitespace, but it's impossible to find every use case, and we've now already run into two relatively significant bugs where whitespace wasn't being trimmed, and I'd hate to think there are more out there, right? How is whitespace making its way in?
B: Yes, yeah — but we've always allowed it. So, on the other hand, if we do it this way and we auto-trim it, then we will create an apply loop, where apply will repeatedly say: oh, this isn't what I wanted you to apply — it has a space in it — let me apply a new version. We would accept it, trim it, make the application, and then apply would come back and say: oh, this isn't what I expected you to have — let me do it again.
B: So I think that we should talk with API machinery folks, because maybe I'm misunderstanding how the apply loop would work — but I bet I'm not. So: is there an open bug or something on this?
B: Cool, all right — well, let's carry the discussion in that bug. It may be that we simply don't have a good answer here. Maybe we use the API warnings mechanism, to send back a warning that says: hey, we're accepting this, but just so you know, you've got trailing whitespace in a field that really doesn't allow it.
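(Reference: a minimal Go sketch of the API warnings mechanism mentioned here, using the k8s.io/apiserver warning package; warnOnUntrimmedField is a hypothetical hook for illustration:)

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/apiserver/pkg/warning"
)

// warnOnUntrimmedField shows the shape of the warnings mechanism: the request
// is accepted, but a Warning response header goes back to the client
// (kubectl prints these), rather than rejecting the object outright.
func warnOnUntrimmedField(ctx context.Context, field, value string) {
	if value != strings.TrimSpace(value) {
		warning.AddWarning(ctx, "", fmt.Sprintf(
			"%s contains leading or trailing whitespace; accepting it, but consumers may break", field))
	}
}

func main() {
	// In a real apiserver the context carries a warning recorder; with a
	// plain context this is a no-op, shown here only for shape.
	warnOnUntrimmedField(context.Background(), "spec.loadBalancerIP", "10.0.0.1 ")
}
```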
B: Okay — Swaitha and Rob, do you have more you wanted to go through?
A: Cool, thanks guys. The next item was to check in on the state of a KEP from 20 d14. I assume that is meant to be the one for building a new kube-proxy.
P: So, for those who don't know yet: the idea was to split the proxy into parts — one part with Kubernetes business logic, and another part with the ability to apply that logic, like iptables or others already do. So I started on that idea, and that's some input and some descriptions, and this seems to go to a new proxy that would be split, with a part in the cluster that will serve an API to simplify clients, and there's a discussion about using XDS tools to move data around that.
P: So, for the current state of what I did in this project: for now there's nothing about XDS, but I did that split into parts. So you have proxy stuff that's connecting with your cluster, as the current kube-proxy does, and this part then allows you to plug multiple backends into it. So you have the ability to have a simplified model, and to serve multiple clients from only one connection to the API server.
P: I have a fake proxy — that I need, to stop the previous one, the fake proxy — that allows me to feed fake data to this; it's read from a YAML file, just for presentation purposes. And there are two things: so I can connect, and my node has its node model, that is simplified and limited to what the node needs to see, and there's a global log.
P: If you want to see the simplified state for the node: it's just having one service, with a cluster IP, some external IPs and two backends, and all of that is going through a diff, and it works the same way as if the proxy data was coming from a real server — which is something I can do easily, by just starting my kube-proxy 2 instead, and it will be fed with the data from a real server, which is not in the same state.
B: So I think this is super cool — obviously, we've talked about this before. What I want to think about, I guess, is: how do we proceed, if we want to build —
B: Let's assume that we want to use this, right — and I think we still have to prove it, and do some benchmarks and trials, but let's assume that those will all be successful. How do we want to proceed on this, in an incremental fashion, so that we can actually integrate this into the code base in a way that people can consume?
P: Yeah, it can be external — maybe not in the — I mean, the idea is to split it into a separate project anyway. So let's start by not putting it in the main repo. And what I've done, on the clusters I've been testing, is that I just put some labels on my nodes — basically, kube-proxy v1 and v2 — and the DaemonSet is using that as a node selector, and I can incrementally deploy the proxy on the clusters.
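(Reference: a minimal Go sketch of the label-gated rollout described here; the kube-proxy-flavor label and the image name are hypothetical:)

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical node label steering nodes between the two proxies: nodes
	// labeled kube-proxy-flavor=v2 run the new DaemonSet, the rest keep the
	// stock kube-proxy.
	nodeSelector := map[string]string{"kube-proxy-flavor": "v2"}
	podLabels := map[string]string{"app": "kube-proxy-v2"}

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-proxy-v2", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
				Spec: corev1.PodSpec{
					NodeSelector: nodeSelector, // only lands on opted-in nodes
					Containers:   []corev1.Container{{Name: "proxy", Image: "example.com/kube-proxy2:dev"}},
				},
			},
		},
	}
	fmt.Println(ds.Name)
}
```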
F: I have — yeah, I have to agree with that statement. I don't think we should integrate the code; I think it should be standalone, somewhere else, and it gets built and released as part of the entire release cadence that we have. Having it outside the code will simplify a lot of things, and it can be as simple as: oh hey, you have this default implementation, and this alternative implementation. And then this can be experimental — it follows the alpha, beta and all that stuff — and then, once it's ready, we can just switch off the old one.
B: In general I agree, and I've advocated before for trying to get kube-proxy out of tree — so, agreement in general on that. There's still the distinction between —
B: Do we move it out of tree, or do we move it to staging? Because there's some value to being able to make atomic changes across them, but there are also some accidental couplings that happen, that we may want to just discourage from the beginning, right? And in terms of getting incremental adoption: I know that, like, all the XDS stuff is very much up in the air, as to whether we want this actual protocol to be XDS or not.
B: I think there are a lot of good reasons to do so, but until we've done it, we can't really say whether it was a bad idea or not. Would it make sense to make this available as a library also, so that I could build a monolithic kube-proxy replacement that links this as a library and calls out into my Go-based plugin — not a plug-in in the plug-in sense, but like an interface, right?
B: I feed you an interface, and you call out to me, and then we could say it's literally a drop-in for kube-proxy, and it uses this library. And once we have some confidence in this library, we can say: well, here we make the library switch to send XDS instead, and now your kube-proxy can get even simpler.
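(Reference: a minimal Go sketch of the interface-based split Tim describes; all names here are hypothetical, not an actual kube-proxy API:)

```go
package main

import "fmt"

// Service is a deliberately tiny stand-in for the simplified node-local model.
type Service struct {
	Name      string
	ClusterIP string
	Backends  []string
}

// Backend is the interface a dataplane (iptables, IPVS, ...) would implement:
// the shared library owns the apiserver connection and the model, and calls
// out to the backend whenever the node-local state changes.
type Backend interface {
	Sync(services []Service) error
}

// Run stands in for the library entrypoint that effectively becomes the new
// main: it would watch the cluster (or, later, an XDS feed) and drive the
// backend. Here it just pushes one illustrative static state.
func Run(b Backend) error {
	return b.Sync([]Service{{Name: "foo", ClusterIP: "10.0.0.1", Backends: []string{"10.1.0.1", "10.1.0.2"}}})
}

// iptablesBackend would render the state into iptables rules; here it prints.
type iptablesBackend struct{}

func (iptablesBackend) Sync(svcs []Service) error {
	for _, s := range svcs {
		fmt.Printf("program %s -> %v\n", s.ClusterIP, s.Backends)
	}
	return nil
}

func main() {
	// A "kp2-ipt"-style binary would be little more than this call.
	if err := Run(iptablesBackend{}); err != nil {
		fmt.Println(err)
	}
}
```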
P: They were trying to put that as a library that could be consumed by the proxy, and I'm not sure the proxy itself is very well suited as a library. I've kind of done something that's communicating inside the process, but it's really, like — I'm doing that to do a library, but it's kind of forcing the library shape on something that's really a decoupled API.
B: I don't think, in that case, it would be a literal drop-in — but, like, modulo the configuration. So if you erase the configuration, it could be a functional drop-in. But what I think we'd want to do is not build a monolithic thing that has IPVS and user mode and iptables; but, like — let's assume, let's call this kp2, right? So I build kp2-ipt, and kp2-ipt is the iptables main: I have a program which is my main, I call lib-kp2 and I pass an interface, and lib-kp2 — sorry.
B: We need a better name — this library, or whatever, takes over and effectively becomes the new main, right? But it calls out to me, and I have the iptables logic. So now, if I want to run in iptables mode, I deploy that onto my nodes, and if I want to run in IPVS mode, it's a different binary.
P: Yes, but we already have the — you can see my screen again? Yeah? Okay. So the way it's done already allows some separation, as I have really the proxy itself, which is — this is one pod, just one pod. The idea was to use two containers.
P: Both are the kube-proxy, but one is just bringing up the API itself, which will listen on a unix socket — so it's a local file — and you have the iptables thing, for instance, that's here, and this one gets the capabilities for net administration, and its own flags.
P: For the — yes, yes, I'm not yet sure where it would fit, if we want to, because we have that kind of global data around the cluster, and the local node data, that's simplified. And so I'd say, if we have XDS, there will probably be something near the Kubernetes API that will convert that to XDS data, and then XDS will handle the transport, and then we'll convert that back to the local node interface, with something that would be the kube-proxy 2 here.
F: What I'm trying to hint at is: I love the plug-in model, for reasons that include CNI. People can build their own plugins, that either make the entire feature set or just implement parts of it. Let's say somebody does offload in a certain way; somebody comes and does XDS; somebody finds a certain network card that does something funky — and all of that. The thing I saw was — obviously, what I saw — it's very cool.
F: Thank you very much for that. The problem I have was the sockets in between the two layers, as we've introduced a new set of failure modes, right? One of them can fail, and then debugging becomes interesting — in addition to a new set of challenges around building and shipping, and all that. But, arguably, building and shipping is a one-time problem, and you can safely solve it. The failure mode of one container restarting in the middle of a request becomes a totally new failure mode, that we did not deal with.
F: So that's why I was asking: can we do something around loadable plugins? I know Go doesn't support that — except, it wasn't, right? Now, as far as I know — maybe somebody knows some details. But the idea of: I want to stitch together a proxy that consists of some parts from this person, and some other parts from that person — to me it sounds appealing, especially as we move on to edge networking and all of the things that we expect.
B: But the restrictions are so draconian as to make it useless. They have to be compiled with the exact same compiler version, and the same exact version of every common dependency. So it's just not useful — it's not useful unless you're building them yourself, right?
B: Should we — should we make a sigs repo for this? Like, do you want to — let me back up: is it your intention to contribute this to Kubernetes entirely?
B: Okay, so perhaps we should open a ticket to make a new sigs repo, and you could actually move it, with the history, into a kubernetes-sigs repo, so that we have a CLA in place, so that everybody can contribute if they want to.
B: In kubernetes — or is it the org? — there's a repo that is for the GitHub org admins, to create a new repo for you.
B: Right, yeah — feel free to brainstorm on it a little bit. When we open the issue on that org repo, we can come up with some clever names. Oh, there's — thanks.
B: Awesome. All right, did you have anything?
S: Yeah — so I put in that — and, hey, everyone. So I think some of you are already aware that we've been collecting policy-related use cases in the network policy subgroup, and some of us within that group have been looking at introducing an admin-focused resource, to complement the network policies, at cluster scope — so let's call it cluster network policy.
S: So what we would probably need is about two thirty-minute time slots, in, you know, back-to-back SIG Network meetings — or, I don't know how we want to do it, but we would need a chunk of time — to, you know, present the proposal to all of you guys. And we'll send a slide deck and documents on the mail thread, so that, you know, anyone who is interested can read it beforehand, and then, you know —
B: I think you just — like, I don't think we have any precedent for guaranteeing slots, other than signing up on the agenda. So in the agenda doc, you can sign up for Jan 21 already — you put yourself on there, and mark that you want a full 30 minutes, so that we can keep track of that. And then you can even add a block above, for Jan 20 or whatever — February 4th, or whatever it would be.
T
Oh,
no,
I
think
that's
that's
great.
We
can
just
add
ourselves
to
the
agenda
and
we
can
also
link
the
the
presentation
in
there.
So
if
people
are,
you
know
they
wanted
to
see
the
deck
beforehand,
and
you
know
I'm
prepared
with
questions.
I
think
that
would
be
that'd
be
great,
but
yeah.
We're
very
excited
to
present
this
to
you.
B: Yeah, I wanted to just hand the floor to Jay, to tell us: where are we with that proposal around namespace name labels? Oh —
C: Yeah, I think it's all — I was just waiting for it to merge, so we could iterate on whatever's left. I don't think there's anything contentious. I just — I figured Liggitt was just underwater from Christmas, so I just stopped asking — I stopped pinging — but I can check again.
B: That's very nice — to stop spamming, very polite. It's not Christmas anymore! I'm giving people this week, and then I'll start spamming people next week, okay. But, for what it's worth, people are spamming me already — so it's fine. Okay, maybe it's not too —
C: Yes, okay, I'm gonna start spamming people also. I mean, my thing is — I was talking to Ricardo about this — we would like to merge these together, because then we can have a nice story about: here, the API just has these two new things now, in this new release — as opposed to titrating them in randomly. And we're going to have to check CNI providers.
B: Yeah, I understand that, but — like, Ricardo's change is going to need to go alpha, because it's a new field, and this one maybe doesn't, so they're going to appear more randomly to users anyway. Like, I wouldn't sit on this for two quarters, until the new field is beta, right? I would just get the namespaces one in as soon as possible. It's part of one larger narrative, but the timing won't line up.
C
Is
there
anyone
other
than
ligo
who
can
do
the
final
lgtm
who's
been
active
on.
U: Just — just for your knowledge: Manuel is stepping down from ingress controller ownership. I don't know if you all are aware of that. So I think this is something that we need to take a look at, and take care of, probably, because this is the ingress controller that most folks that are not on cloud providers use.
B
Yeah
good
point:
thank
you
for
bringing
that
up.
Yeah
alejandro
ping
me
this
morning
he
he's
been
struggling
to
keep
his
head
above
water
with
all
the
work
between
a
real
job
and
ingress
engine
x
maintainership.
B: He hasn't really gotten anybody who's stepped up to take on a substantial amount of that work, and so he's planning to just step down unilaterally — which leaves the project more or less unmaintained, unless some body, or bodies, want to step up to maintain it as a community. I think it's an important project; I don't personally have time to help right now.
K: Sorry — I missed a little bit. Are you talking about the ingress-nginx thing? Yes? Yeah — I was gonna bring this up as well. I'm gonna try to reach out to Manuel. I also reached out to — I don't know how to pronounce it — Elvin, where the two of us are the only other reviewers left on the project, aside from Bowei. Not sure if Bowei has any interest.
B: I would be surprised — knowing all the things Bowei's always got on his plate, yeah, I'd be surprised. This is — this is the real fun part of the open source community, right? We've all leaned on Alejandro for a long time on this project. It's an important project; it needs some help. So I'm sure he would be happy to see anybody who wants to help step up.
B: We don't need people to volunteer here and now, but, you know, do think about it — especially those of you who work at big companies that tend to use Kubernetes — and think about whether you can help maintain this important project.
F: This is one of those statements that Tim throws out looking at the camera, but he's actually looking at a few people — and I know exactly that look. So I promise I'll try to find — find some help on this, but I can't promise an answer yet. So yeah — yeah, I think we should be able to.