From YouTube: SIG Network Meeting 20201001
A: And we're recording. This is the Kubernetes SIG Network meeting from October 1st, 2020. Bridget's gonna start us off with a bit of issue triage.

B: Great, I just need to be able to screen share every time.

B: Okay, how does that look? Does that look like a... yep? All right, looks like a triage board, great. We'll start with... oh, for spooky October: weird output.
B: Triaged? Sure, okay: "static IP for pods, with stable network identifier."

A: Depending on what it is they're trying to do... I didn't look fully, but I know we have other issues like this as well. Is this the right one? Yes? Yeah, that's me.

B: Okay, "mission tests."
I: Well, first, I'm not sure which SIG it would belong to. Is it API Machinery or Instrumentation? Because is that the service proxy they're talking about?

H: This isn't kube-proxy, it's under nodes. Oh, just... the node proxy is going to pick a node and send you to a node, right? But they have, notably... oh, I bet there's some angle brackets in there that they didn't properly quote. That's why there's two empty slashes.
H: We typically do, but also we try to be helpful, so I think we should bounce this to SIG Node and see what they think. It may just be "it's port X, done."

H: Sorry, did they say "topology" in there? I can't see it.

H: Well, I mean, the good news is that service topology is not going anywhere, so you can assign this to me and I'll break their hearts.
E: I just kind of already fixed this, but this is definitely one for me, Rob Scott.

E: Yeah, this feels like a question that may be better directed at Traefik than at Kubernetes itself. Yes, assign it to me, I can follow up.

B: This one's assigned to me; it looks like you were talking to the person.

H: I'll take a look at it again. It's fine, okay.

A: Cool, thank you, Bridget. Next thing, if I can find my window: Dan, you wanted to talk about dual-stack status.
H: So I've been, for crazy reasons, unable to look at these. I have them all open now; I started looking at them in depth this week, and Khaled and I have been going back and forth about some of the lower-level details of the API transformations. I think that, other than the API transformations, the rest are reasonable PRs. Like, I've scanned them, I haven't done deep reviews, but they seem to be in the right place.

H: It's just the API machinery stuff that has been back and forth and is super, super subtle. So I actually have reasonable confidence that we will be okay with respect to deadlines. Khaled, if you're here, you may disagree with that... and I'm receiving a lot of silence. I still think that we'll be okay, and this is at the very top of my PRs to review.
O: Okay, so since you said "deadlines": which deadlines are relevant, and what is our absolute last date to make a decision on whether to move to beta for 1.20 or not?

B: I wanted to make sure we talked about that, actually, because we had talked a little bit on GitHub in the enhancements issue, right? And I 100% agree that we want to get this stuff landed. But given that we're talking about making a giant change to the networking of Kubernetes itself, I feel like it would be irresponsible of us to say "oh well, even though we rewrote everything from the last alpha..."
H: I feel pretty good about the reviews that we've been doing on the API. I think we've discovered a lot of stuff, and we've looked at this stuff under a microscope in a way we never have before; and honestly, Khaled has invented some new techniques within the API machinery. So my feeling is: if we alpha this cycle, we have a really reasonable shot at beta.

H: I have asked API Machinery to take a look and make sure that we're not missing anything in the new stuff, but Daniel Smith's immediate suggestion, when I presented the problem to him, was exactly what we were planning to do anyway. So I feel pretty good that they are in alignment. In order to try to make some of these a little bit better, I sent them a separate PR, which I added to our agenda.

H: I can discuss it today when my name comes up on the agenda, but it's a much smaller PR to get some discussion on, which shows the technique that we're talking about.
O: Well, so we have an enhancements freeze on the 6th of October, and code freeze is the 12th of November.

J: If we consider any of those critical for... well, I guess we're not going to do that. Yeah, I mean, I guess we need to at least figure out if any of the other pending KEPs are critical to get in this cycle, because if so, there needs to be agreement on them by enhancements freeze. But I think they're all smaller pieces, so they're things that we could potentially land in 1.21 and still be confident about calling it beta in 1.21.
H: Excellent point. I have not fully digested them, but you're right: strictly speaking, we should get those KEPs approved, since they are all directly related. And for the sake of pipelining: Khaled, are you acting as approver on those related KEPs?

H: Yeah, why don't you act as approver? That way, while you're waiting for me, they're not also waiting for me.
M: Yes, so the first three are the API stuff. Those are the things that we will be locked out of once we declare beta; the rest of the stuff can be fixed at any point in time. So in terms of critical, like, commitments: yes, it's the first three, the API pieces. But the rest of the code is equally important in terms of the functionality, routing and so on. Lars and Antonio have done a significant amount of validation, but we need more, if you can afford more, right?

M: I'm getting nightmares, like: you sit down and, you know, "what if we did this? what if this happened, and that happened?" and so on, right. But I'm focused, to be honest, focused more on the API, at least for the last 10 days, just to get this thing done.
B: Oh, I was just saying: if we get this merged in early in the release cycle, then we hopefully can see if there are any knock-on effects that we weren't expecting, right? Because the unknown unknowns are the ones that we can't predict anything about the timing of. Tim?

H: I was just gonna say exactly that. There's always a probability that we missed something that is important, and we won't really find out until we have a diversity of eyeballs trying the thing out and finding what we've missed. So while I feel pretty good about this, I also felt pretty good about it last time.
O: Yeah, I mean, we all stood up on stage... well, you and Khaled did, last year at KubeCon, and talked about it, and we're almost a year later and, like I said, we're still discovering things. How do we not lose momentum, I guess?
J: I was just going to say, one problem we had before is that nobody had started using it, and the handful of people in the community who did start using it immediately ran into bugs. I guess a lot of them weren't bothering to report them; either that, or nobody else was actually testing it, because we fixed a ton of bugs in dual stack between 1.18 and 1.19, such that it was pretty much unusable before that.

J: So I guess we at least know that people can try it out now, which is good.
I: We can try and put that call out there. I think the other thing that happened last time is that the team at Red Hat did some soak tests after the fact, and, I don't know, maybe we can wave the magic Clayton wand sooner this time, because there's a lot of use cases that they're testing that aren't upstream, about migration, and that's where we caught a lot of the problems: when you turn it on, when you turn it off, field defaulting, blah blah blah.

I: So I don't know if we can light up those test paths a little earlier in the release cycle, rather than at rc1, and see if we can get more signal out of whatever is internal to Red Hat on the use case of user migration between versions. I think that was where we got caught, that and feature-flag flipping.

I: So I don't know if anybody has that, because that all came out of the woodwork afterwards. The other thing we could do is push on it and go back to the release team and say: can we make a big, concerted effort, comms-wise? We really need a call to action here: "if you're interested in this, we need you to test it, because we want to move."
B: I also think... this is a year later, and this is the last meeting of my day; it is my seventh meeting of the day. My first meeting of the day was with people who are chomping at the bit to be using dual stack pretty much instantly, and we keep having to tell them that it is not in any way production-ready, and "we can help you set up a test cluster." So I think there's interest out there.
O: Yeah, no, I mean, I was thinking in terms of: we had a big push last summer with, I think, the first round of giant PRs for dual stack. Here we are a year later with the second push. I'd be really sad if we're in the middle of 2021 with a third big PR for dual stack. But I do agree we're in a better place. And just to close out the conversation, since we have a lot more stuff on the agenda...
O: In summary, as a SIG (please tell me if this is accurate), we are not comfortable declaring the API beta at this point in the release cycle. We want to close out the PRs listed above and make sure that the KEPs there also get discussion and agreement, and then at that point we could revisit; but we don't think that it's going to be ready to declare beta for 1.20.
B: I think it's accurate, except I think we are biting off more than we can chew, and it's not realistic for us to be looking at the date of October 1st and saying "we might..." We might... no, we're not going to. Let's not put that pressure on ourselves; let's be kind to ourselves and say: what we can do is deliver an alpha that's really solid, that works, that we are ready to stand behind, and make any changes that we actually have to make.
M: That are not done anywhere in the APIs, all right? Yeah, and the cases that we don't... it's the unknown unknowns. Last time we were dealing with pluralization; now we're dealing with linked fields clearing on updates, and stuff that's really, really scary, right? I am fairly confident that it works, but I haven't seen it running 100%, so...

O: Right, yep. No, I mean, that's totally fair. I'm just trying to bring some clarity to the questions, since we keep dancing around it.
J: Pod IPs are already pretty much... like, they're mostly not feature-gated. If the CNI plugin reports dual-stack pod IPs, then kubelet will use them.
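[Editor's note: to make "kubelet will use them" concrete, here is a minimal sketch of what a dual-stack pod status looks like. `status.podIPs` and `status.podIP` are the real fields; the addresses are made up.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With dual-stack, kubelet records one pod IP per address family in
	// status.podIPs, as reported by the CNI plugin. The legacy singular
	// status.podIP mirrors the first entry. Addresses here are made up.
	status := corev1.PodStatus{
		PodIP: "10.244.1.5",
		PodIPs: []corev1.PodIP{
			{IP: "10.244.1.5"},       // IPv4
			{IP: "fd00:10:244:1::5"}, // IPv6
		},
	}
	for _, ip := range status.PodIPs {
		fmt.Println(ip.IP)
	}
}
```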
M: Beta and alpha don't really mean anything to the code; they mean something to the user. So the alpha/beta stage doesn't really mean anything code-wise at this point. To answer Tim's question: there are users who have a partial problem, of egress to dual stack. They don't necessarily want ingress dual stack, but they want to be able to consume others, like external endpoints that happen to be v6, from a cluster that's v4. So there might be value there, in your question.
K: Yeah, thank you, Casey. So I'm going to try to be really fast, but there is a lot of discussion here. So in April, Jay started to gather some information about who wanted to join a discussion about the evolution of network policy.

K: And me, as an end user with a cluster-admin profile, I decided to join, because I just didn't want to use vendor-specific network policy APIs to get features like cluster-scoped network policies or policy priorities. I think it's important to make clear what we are trying to do there; and instead, to have this as a part of the Kubernetes API, or at least to try to get some consensus, we started to gather some GitHub issues and community issues.
K: Some nice ideas from the discussions in the group helped us turn this into an official project, and then Dan Winship took the document, which was pretty messy, and gave it a nice organization that made some things clear for us, like: we may have this feature now in v1.
K: So I'm here as the messenger (please don't kill me) to show you the first user stories that we've been discussing in the last two meetings, which could become additions to the current v1 API, except for the last one, which Abhishek is probably also going to bring up, because that's a big effort and something really different. And the idea here is that we can discuss these, so as to open the KEP issues and start writing the KEP proposals.

K: We are still discussing some user stories and trying to figure out how this is going to become a new object and a new data model. So there is a link in the agenda; if you want, I can share the screen.
I: That's okay. Casey, go ahead.

K: Yeah, so here it is. Those are the three use cases that we've been mapping as something plausible to be added to v1, with the effort, or the discussion level, that we think each is going to bring us. So the first one is port ranges and port sets, because the current network policy "ports" field, in ingress and egress policies, is an array that needs a declaration of each single port to be covered.
K: So, like, the problem is: a user wants to allow egress to a block of ports on another cluster, like 30000 to 32000, and the user would have to declare each port as a single field in the network policies. We are trying to solve this. And also: a group of users wants all their pods to communicate with a range of ports, like 6000 to 9000, except for the Redis port, which they consider insecure.
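[Editor's note: a rough sketch of why this user story hurts with the v1 API as it stood at the time: each allowed port is its own `NetworkPolicyPort` entry, so a 30000-32000 range means about two thousand entries. The Go below uses the real `networking/v1` types; the rule itself is invented. This story is essentially what later shipped as the `endPort` range field, but at the time of the meeting it was still just a proposal.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	tcp := corev1.ProtocolTCP

	// v1 has no range syntax: every port in 30000-32000 must be listed.
	var ports []networkingv1.NetworkPolicyPort
	for p := 30000; p <= 32000; p++ {
		port := intstr.FromInt(p)
		ports = append(ports, networkingv1.NetworkPolicyPort{
			Protocol: &tcp,
			Port:     &port,
		})
	}

	egress := networkingv1.NetworkPolicyEgressRule{Ports: ports}
	fmt.Printf("one egress rule needs %d port entries\n", len(egress.Ports))
}
```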
K: So this is what we are going to try to solve in the first use case. I'm going to move forward, and then we can probably discuss later. So the next one is: select namespaces by name in network policy. I know there is a lot of discussion with SIG Architecture; I know that Tim is pulled into an issue also discussing virtual labels and so on.
K: But we decided to put this here, because this is something that we see a lot of people asking for, and we don't know if it's okay to wait until virtual labels become the consensus or not. So: the API supports only selecting pods by label, which is fine, and namespaces by label, which is not fine, due to the problems specified below. Like: namespaces are often standalone and may not need to be logically categorized using labels (like kube-system), and referencing by name prevents unnecessary labeling of namespaces just to fit into the network policy API.
K: Also, a group of users wants to allow ingress to their pods from any pods in another namespace, but they don't want to trust a label as the selector of the namespace, because with per-cluster RBAC any user can put their own labels on. So if I can write to my namespace, I can put a label on my namespace and make it look like a trusted namespace.
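[Editor's note: a small sketch of the trust problem as described, using the real `networking/v1` types; the label and namespace names are invented.]

```go
package sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// v1 can only select peer namespaces by label selector.
var trustedPeer = networkingv1.NetworkPolicyPeer{
	NamespaceSelector: &metav1.LabelSelector{
		MatchLabels: map[string]string{"team": "monitoring"},
	},
}

// The catch: anyone who can edit their own namespace can run
//   kubectl label ns attacker-ns team=monitoring
// and now matches trustedPeer. Selecting a namespace by its immutable
// name (e.g. "kube-system") has no v1 equivalent, which is the gap this
// user story is about.
```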
K: And this is a medium effort, because I know there is a lot of discussion to be had here in SIG Network. And the last one is from Abhishek.
K: That's about a cluster-scoped network policy, and this one we've been discussing: okay, this needs a different data model; how do we know whether this is an enforcing cluster-scoped policy, or more like a suggested cluster-scoped network policy? The limitation in v1 is that the API only supports grouping within namespaces; you cannot create a rule that applies to a whole set of namespaces. The API only supports expressing the intent of a developer role and does not capture the requirements of an administrator role.
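[Editor's note: to make the developer-versus-administrator distinction concrete, here is a purely hypothetical sketch of the kind of object under discussion. None of these types exist; the subproject's design work, not this snippet, determines what a real API would look like.]

```go
package sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterNetworkPolicySpec is NOT a real Kubernetes type. It only
// illustrates the two ideas from the discussion: a cluster-scoped rule
// selects namespaces (not just pods), and it must say whether it is an
// enforced admin rule or merely a default a developer may override.
type ClusterNetworkPolicySpec struct {
	// Namespaces the rule applies to, cluster-wide.
	NamespaceSelector metav1.LabelSelector
	// "Enforce" (admin intent, cannot be overridden) or
	// "Default" (applies unless a namespaced policy says otherwise).
	Mode string
	// Reusing the existing v1 rule vocabulary for the rules themselves.
	Ingress []networkingv1.NetworkPolicyIngressRule
	Egress  []networkingv1.NetworkPolicyEgressRule
}
```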
K: So, as I said, I am here as an end user with a cluster-admin profile, and this is something that I've been seeking a lot, and I only get it from vendors, not as a vanilla, upstream, core API object.
K: So those are the problems to be solved, and the bigger one, I think, is the last: once a namespace has a default deny rule for ingress and egress, it's up to the developer to open a namespace-specific network policy, or maybe open everyone to communicate, to keep the DNS working.
L: Yeah, so just in case it wasn't clear: these are the use cases that we want to start writing KEPs for, and we're looking for feedback from the SIG on whether these ideas aren't crazy, are reasonable, and are trying to solve real problems, before we put in the effort and start writing KEPs for them.
A: I mean, I can't speak for everybody, but my sense is that these are a reasonable set of scenarios and it's probably worth progressing them. And I agree that the third one is large, and maybe not quite as ready to jump in and write code on right now. But it is certainly a use case that we hear a lot.

P: About... if you introduce the third one, don't you need to have some kind of priority in the network policy resources?
Q: As part of the cluster-scoped policies, we are kind of looking at, you know, precedence, and how to enforce those policies; those are the kinds of questions we're trying to answer.
Q: You know, Govind, Yang and I have been meeting weekly to write down and solve those problems. So we are preparing a Google Doc, and once we have a reasonable set of answers, and maybe a sketch of an API, maybe we can first show it to the subproject, and then, once we have broader agreement, we can come back here. But that is a very large effort.
Q: So it's not something that will be immediately available, but at least, I think, if we all agree that this is something that we can spend time on, then we'll put effort into answering all the questions, like precedence and the responsibilities of administrators.
A: So we don't have a lot of time to talk about details for these, but does anybody have any objections to the subproject making KEPs for any of these?
R: I'm Alex, so... I work at Zalando; we have a bunch of clusters in production, basically all of Zalando's hosted Kubernetes. And recently, as a part of a migration to service mesh, and Istio, our service mesh team asked that they would like to have different service CIDR blocks between clusters; like, right now we have the same service CIDR block in each individual cluster. So I came up with a solution...
R: ...for how we can do that, and I tested it, but I would like to get some feedback from SIG Network, because you definitely know better how it works. So my current approach is the following. I patched kube-proxy and added a second field where we basically specify the old service CIDR block, and for that old service CIDR block, the external IPs that fall within it would be treated as cluster IPs, so kube-proxy would generate proper iptables rules for those cluster IPs.
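[Editor's note: in rough pseudocode, the kube-proxy change described is something like the following. This is a sketch of the idea, not the actual patch; the CIDRs are invented.]

```go
package main

import (
	"fmt"
	"net"
)

// isClusterIP sketches the described patch: treat an address as a
// cluster IP if it falls inside either the current service CIDR or the
// configured old one, so old IPs keep their iptables treatment.
func isClusterIP(ip net.IP, serviceCIDRs []*net.IPNet) bool {
	for _, cidr := range serviceCIDRs {
		if cidr.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	_, newCIDR, _ := net.ParseCIDR("10.96.0.0/16") // made-up new block
	_, oldCIDR, _ := net.ParseCIDR("10.3.0.0/16")  // made-up old block
	cidrs := []*net.IPNet{newCIDR, oldCIDR}

	// An "external IP" taken from the old block still counts as a
	// cluster IP, so existing routing keeps working during migration.
	fmt.Println(isClusterIP(net.ParseIP("10.3.0.10"), cidrs)) // true
}
```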
R: That's, like, the first part. For the second part, in the API server I disabled validation, to allow updating cluster IPs, and added logic: if clusterIP is set to an empty string, then basically try to allocate an IP address again and assign it to the service. Plus all the other stuff, like supporting certificates for the different Kubernetes service IP addresses and whatnot. So then I'm able to basically roll out kube-proxy.
R: That would allow me to have cluster IPs from both sides, from the old service CIDR block and from the new service CIDR block. And then I wrote a custom utility that would basically update all of these services: it would put the cluster IP from the old service CIDR block into externalIPs on the service, and set an empty clusterIP. So when I submit this to the API server, the API server would allocate a new cluster IP, and it would not disturb any existing routing.
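[Editor's note: the per-service utility step, again as a sketch rather than the real tool: park the old cluster IP in `externalIPs` so the patched kube-proxy keeps routing it, and clear `clusterIP` so the API server allocates a fresh one from the new CIDR.]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// migrateService mirrors the described per-service update. Note that a
// stock API server rejects clearing spec.clusterIP on update; this only
// works because validation was relaxed in the patched API server.
func migrateService(svc *corev1.Service) {
	svc.Spec.ExternalIPs = append(svc.Spec.ExternalIPs, svc.Spec.ClusterIP)
	svc.Spec.ClusterIP = "" // patched API server re-allocates on update
}

func main() {
	svc := &corev1.Service{}
	svc.Spec.ClusterIP = "10.3.0.10" // made-up IP from the old CIDR
	migrateService(svc)
	fmt.Println(svc.Spec.ExternalIPs, svc.Spec.ClusterIP)
}
```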
R: I would not expect any iptables changes for the old cluster IPs, and then, after all services are done, I'll just roll out a second time, removing the external IPs. So after that rollout, everything should just be handled by the new cluster IPs. So this is, like, the TL;DR approach, and I would like to hear: what am I missing, basically? Hopefully it was clear.
H: And how, maybe I missed it: if I have two CIDR blocks, how do you ensure that new services always come from the correct CIDR block, so you can drain the old one?

R: So the API server would be updated to only use the new service CIDR block. It would complain that there are services from the old service CIDR block, but the patched kube-proxy would continue to generate iptables rules to treat them as cluster IPs. Like that, yeah.
R: So the old CIDR block is known only to kube-proxy, basically; kube-proxy would properly handle the old service CIDR block. And after the rollout is done, you just... yeah. I need that because I don't want to break any currently running workloads, and then we can just re-roll the cluster.
R: So we reschedule all the pods, basically, and then DNS would return to them only the new cluster IPs, like CoreDNS would, as it would deal with the external IPs. So it looks pretty deterministic to me, but I'd like to get, you know, feedback on it.
A: So I'm guessing that lots of us are still kind of digesting the approach a little bit. Is there a way, like... somewhere you can write down the approach you've been thinking about, and give folks more of a chance to digest it and provide feedback that way?
R: Okay, I mean... so the question is, I'm not sure if everyone in the community needs this, so I'm not sure... like, I'm fine to just keep the patch internal, in our own fork, for our migration, and deal with it. So I'm not asking about pushing to have this migration support in upstream Kubernetes, unless there is a request for that.
R: Okay, then I'll ask in Slack what the process of writing a KEP is, and follow up. Thank you.
A: Thanks. So we're right at about 10 minutes left, and we've still got a few items, so we may not get through all of them, but let's do our best. Matt, you are next.
T: Hey everybody, Matt Fenwick here, kind of, sort of new to the Kubernetes stuff. So I'm working on migrating the network policy tests, the e2e tests, over. We have a PR up and, you know, we're working through that. So I just wanted to ask anybody for a little bit of advice or help on, since I'm pretty new to this, kind of navigating this process, figuring out how to get the PR to the next level, and so on.
G: Just today we had somebody do a refactor, and we had one test failing for 15 days, and it was just a simple thing. So what I suggested is just starting in parallel, you know, with more incremental steps, because if you are going to push the PR with a lot of changes, that much code is not great to review. And you should try to engage with the SIG Testing folks too.
D: Yeah, that's... yeah, we talked about that in Slack today. So, yeah, Matt, I guess the next thing to do is we need to just move the labels for the network policy stuff so that it doesn't collide with the original one, so we can have them orthogonal. And Antonio had a good idea, which is just (and I think we merged it as an update to the KEP) to run, like, ten cycles with no changes to the tests, passing everything in CI.
G: My suggestion is to land just one test only. So create all the scaffolding that you have, with all the things needed to set this up, but just run one test, and with that test I'm volunteering to test it in our CI too, you know. And from that we can start iterating, because if you are going to put up a 2000-line test, nobody is going to be able to review it.
T: All right, cool. Yeah, I appreciate it. And, you know, just to be clear: there's no real urgency on my end. I just want to make sure that I'm doing everything I can to help move this process forward and make it really easy for everybody who's looking at it, and stuff like that.
Q: ...and everyone else is helping. If somebody wants to add new test cases, test cases to cover additional scenarios, shall we target the original format or the new format, or...?

A: Thanks, guys.
A: Oh, you are next.

E: Yeah, mine's pretty quick. I have a couple of KEPs... KEP PRs out there right now.
E: This would be kind of an evolution of topology. Although they're two PRs, they're closely related. One is about subsetting EndpointSlices, and basically updating kube-proxy so it could consume the EndpointSlices that were delivered to the zone or region it was sitting in; and the second one is much larger, and that is about how we would actually subset those EndpointSlices.
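[Editor's note: conceptually, the first KEP amounts to kube-proxy consuming only the slices addressed to its own topology domain. Below is a toy sketch of that idea; the `example.io/for-zone` label is invented here, and the actual KEP defines its own mechanism.]

```go
package main

import (
	"fmt"

	discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localSlices keeps only the EndpointSlices labeled for this proxy's
// zone, standing in for "slices delivered to the zone I'm sitting in".
func localSlices(all []discoveryv1beta1.EndpointSlice, zone string) []discoveryv1beta1.EndpointSlice {
	var out []discoveryv1beta1.EndpointSlice
	for _, s := range all {
		if s.Labels["example.io/for-zone"] == zone {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	mk := func(zone string) discoveryv1beta1.EndpointSlice {
		return discoveryv1beta1.EndpointSlice{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"example.io/for-zone": zone},
			},
		}
	}
	all := []discoveryv1beta1.EndpointSlice{mk("zone-a"), mk("zone-b")}
	fmt.Println(len(localSlices(all, "zone-a"))) // 1
}
```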
E: I think it's valuable to have these separate, because the second KEP is all about how the EndpointSlice controller would behave, and the first KEP is about how you could publish any EndpointSlice from any original location and deliver it to a specific kube-proxy only. So it's kind of a more generic, smaller feature. They're both kind of tied together.
E: I think it is reasonable, if we had to choose just one: the first, EndpointSlice subsetting, is kind of a dependency of this plan for evolving topology-aware routing. But I'm really, really interested in feedback, because there are some changes proposed here, especially on the topology one, and I'm interested in what people think about the new potential approach. And, obviously, a reminder that we're super close (closer than I'd like to think) to that enhancements freeze.
A: Cool, and then one more item, from Minhan.
P: Yes, it should fit. So this is not even a KEP PR yet; it's a draft KEP. I shared it as a Google Doc, so that the discussion will be easier on the doc than in a PR review. So this KEP is about supporting external workloads outside of the Kubernetes cluster.
A: Great, thanks, Minhan. If you're interested in that, please check it out. And so that brings us to time for the day. Thanks, everybody, for coming; I'll see you all in two weeks.