From YouTube: SIG Network meeting 2018-01-11
C: And just for background, the proposal started with the question: how do I always talk to the logging agent that is on the same machine as me? We then expanded that a little bit more to encompass: how do I talk to the database server that is in the same zone as me, or how do I find the proxy that is in the same rack as me, where "zone" and "rack" are effectively arbitrary, right?
C: It's a little different in the Service, fundamentally, because all these other ones are effectively like a two-term expression: there's a left hand and a right hand, and we're saying the left hand is always the same as me, right, so find me something in the same rack as me. But I think that's fine. And so there's the proposal, and the proposal is the simplest possible API for this. It's perhaps, almost certainly, too simple, but it's a really good starting place to figure out whether this is going to work for people.
E: This is John. Hey, you know, I put a couple of comments on there, and I have the same use case for the local one, on the same node, because we're doing DNS that's very latency sensitive, and we don't want to cross the network when we don't have to in our system. But I am concerned about the idea of building the specific logic for this specific kind of routing directly into Kubernetes itself.
E
We
can't
chase
all
the
different
use
cases
in
kubernetes
directly
for
every
single
way.
Somebody
might
want
to
route
something
so
what
I
was
suggesting
is.
Instead,
we
we,
we
have
those
hook,
points
that
called
call
out
to
something
else
to
make
a
decision
now
in
the
case
of
what
you
defined
here,
that
could
actually
be
a
piece
of
code
that
that
takes
exactly
the
topology
API
you've
defined,
but
it
puts
it
in
a
CR
D
instead,
and
it
makes
a
decision
based
upon
the
same
logic
you
have.
C: I mean, you basically just described Istio, and I think that's fantastic, but I think it's far more complicated than what we need for the simplest possible case. I'm hoping, and maybe I'm wrong, and I haven't read your comments so apologies, I'm hoping that the simplest, or close to the simplest, possible thing will actually satisfy some large body of users, and the more complicated stuff will come along as Istio and similar things mature.
E: But nonetheless, I don't think what I'm suggesting is actually more complicated than what you're suggesting, and maybe I introduced it with this whole notion of generalized policy that made it seem really complicated. It's actually just about where you put the code: do you put it in Kubernetes, or do you put it in an extension, when you could?
E: What proxy configuration, I guess, are you suggesting? Because all this is, is about where kube-proxy gets its metadata from. Does it get it from some abstraction that exists in core Kubernetes, or is it going to get it from a CRD? And if it's the CRD, then maybe it's some plug-in to kube-proxy, or...
E: I'm suggesting no, and maybe I'm not fully understanding the proposal, but I'm not actually suggesting any mechanism that would be different. This is about how you configure it: do you configure it as data on a core Kubernetes abstraction like Service, or is it a separate thing that defines how you wanted to do this routing? The hook points in the different layers are the different control points, right? You have different control points in the stack: you have the service discovery piece, which happens before anything.
E: Then you've got kube-proxy, which is programming the actual routing tables, which do the actual routing. Kube-proxy, or whatever is going to configure iptables, would have to read this other CRD rather than reading things off the Service. But if you did that, then you'd have to somehow put in a plug-in to manage those CRDs within kube-proxy. That's what I mean by the hook point, the extension point, of kube-proxy.
E: You're saying that the kube-proxy plug-in wouldn't know how to interpret that into iptables rules, because it's not making the routing decision itself; it's instead trying to set up iptables to make that routing decision. And so, in order to make that work with kube-proxy in iptables mode, you have one implementation that interprets that logic, and then to make it work in some other mode you'd have to have a different...
C: ...slot, and then when the eBPF mode comes in you have to have another one for that one, and the next thing after that comes in. I don't think that's a terrible model, but I actually misinterpreted, perhaps in your favor: if the CRD here was effectively a mapping of, I'm not sure how to express it, but a mapping of which endpoints should be chosen from which nodes, or something like that, for each service, which sounds like a fairly complicated structure.
C: But if it was, you know, like today... let me back up a second. What we're proposing with this is: there is a semantics. The semantics is: look at the labels, pick the endpoints that have the label that matches my node's, for an arbitrary label. But all we have is the labels, right? In fact, we only have one expression, which is equals-equals, and we can talk about whether we need more expressions or not.
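The equals-equals semantics being described can be sketched as a small function. This is a hedged illustration, not the proposal's actual API: the endpoint structure, label dictionaries, and function name are all hypothetical.

```python
def filter_endpoints(endpoints, client_labels, topology_key):
    """Keep only endpoints whose node carries the same value for
    topology_key as the client's node (the 'equals-equals' semantics)."""
    want = client_labels.get(topology_key)
    if want is None:
        # Client node has no such label; fall back to all endpoints.
        return endpoints
    return [ep for ep in endpoints
            if ep["node_labels"].get(topology_key) == want]

# Hypothetical endpoints, each carrying the labels of the node it runs on.
endpoints = [
    {"ip": "10.0.0.1", "node_labels": {"zone": "us-east-1a"}},
    {"ip": "10.0.0.2", "node_labels": {"zone": "us-east-1b"}},
    {"ip": "10.0.0.3", "node_labels": {"zone": "us-east-1a"}},
]
local = filter_endpoints(endpoints, {"zone": "us-east-1a"}, "zone")
```

Note that the only operation is equality on a single label value; there are no set expressions or operators, which is the "almost certainly too simple" starting point mentioned above.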
E: That makes sense. That's right, exactly. So the extension point is fairly simple: it's "give me the list of endpoints for this particular service and this particular client." And you're right that, if that's the extension point, then we don't need to worry about it being different for all these different things. They all use the same extension point, and they configure whatever they're going to configure in the same way, with the decision made somewhere else.
C: That's interesting to me, because actually one of the questions I raised on this proposal was that it sort of feels awkward that I'm defining this in the Service, which is sort of the server side, when it really feels more like a client-side or an administrative policy, not a server-side policy; but Service is the only place we have to configure it. I'd like to think about your idea a little bit. I'm not sure that we can have Services depending on CRDs that way, but I need to think about the rules there.
A: So I had sort of come to the same conclusion as John when I was taking a look at the proposal, but maybe for slightly different reasons. And on that point, I know you were trying to keep things as simple as possible and, you know, create the simplest possible API for it, but I thought that John's comment, about this seeming like something that could be generalized, kind of rang true for me. And so I was kind of wondering: is "topology" the right name for this kind of stuff?
A: For what kube-proxy would do, it's only taking a look at node labels, and so my thought there was: well, yes, this sounds like a great application of labels, and then maybe we don't need the kind of external plugin or something like that, because people could just manipulate labels. But then, if the only labels that are available for this decision are labels on the node, perhaps that's a little too limiting.
E: And an API that says: here's the information I need to make a choice on where I want to route this connection, to what endpoint, and I get back a list of endpoints in some sort of order, and that's it. Once we define that API, then this proposal could go forward outside of Kubernetes, right? This proposal could become just...
E: ...what that API can look like. And then at the different control points, kube-proxy and wherever, that piece of code needs to implement that control point and make that request. And I don't think it's necessarily a Service depending on a CRD; it's a Service depending on making an API call to something, which may be an extension.
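The extension point being discussed, given a service and a client, return an ordered endpoint list, with the policy decided elsewhere, might be sketched as an interface like this. All class and field names here are invented for illustration; this is not an API from the proposal.

```python
from abc import ABC, abstractmethod

class EndpointChooser(ABC):
    """Hypothetical hook point: the data plane (e.g. a proxy) asks the
    chooser for an ordered endpoint list and never sees the policy."""
    @abstractmethod
    def choose(self, service, client_node, endpoints):
        ...

class PassThroughChooser(EndpointChooser):
    """Default behavior when no policy is specified: return endpoints
    unchanged, leaving round-robin or probabilistic selection to the proxy."""
    def choose(self, service, client_node, endpoints):
        return list(endpoints)

class SameZoneFirstChooser(EndpointChooser):
    """One possible policy: order same-zone endpoints first, then the
    rest; the calling proxy is identical in either case."""
    def choose(self, service, client_node, endpoints):
        zone = client_node.get("zone")
        same = [e for e in endpoints if e.get("zone") == zone]
        rest = [e for e in endpoints if e.get("zone") != zone]
        return same + rest

eps = [{"ip": "10.0.0.1", "zone": "b"}, {"ip": "10.0.0.2", "zone": "a"}]
ordered = SameZoneFirstChooser().choose("db", {"zone": "a"}, eps)
```

The design point this illustrates is the one made in the turn above: every control point (kube-proxy, a DNS server, a mesh sidecar) can call the same interface, and swapping policies does not change the caller.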
C: The choices we have are basically a CRD or a first-class resource, and I'm not sure that a CRD would work, so that means we would have to be thinking of this in terms of a first-class resource. It's interesting. I feel like I need to think more about how it might look. I'm struggling a little bit with what the resource would look like. Does it carry the mapping of, for each node, for each service, choose these backends? Well...
E: Actually, that's not quite what I'm suggesting. I'm suggesting that, from the point of view of kube-proxy, what you have is an API, and in that API you have: this service coming from this source, with the client coming from this source; give me back a list of endpoints. What makes that determination on the other side? I don't know; kube-proxy doesn't know; it just makes an API call. That could be some code directly in there that's interpreting some CRD. Well, that could be...
C: The difference is that in almost all those cases, or all that I can think of, the behavior is the thing that we can rely on. We don't break portability, in the sense that if I schedule a container on any Kubernetes cluster, the container runtime interface ensures that the container runs, and the semantics and the API I give it are always the same. And in picturing what you guys are sketching...
E: That semantics is guaranteed as part of the service. That semantics is guaranteed as part of the policy, as I would call it, that's being defined. And I think what Mike was suggesting is that, well, maybe you have different ways: you've got the default thing that we already do, right, which is just round robin, or whatever it is, probabilistic, and that would be the default if you don't specify any pointer to something else.
E: We would have to... I haven't thought it through that deeply, because I was really just bringing up a concern I had with the way that this is being built into Kubernetes. But I think what Mike was suggesting made a little bit of sense to me, and I'm going to have to think about it more, which is: you have some sort of API object, a first-class API object, that defines some...
C: It seems like it would be much simpler, and more sort of Kubernetes-style, to be declarative here, and to say: instead of me polling some service to make routing decisions, you declare them as a resource in Kubernetes, right? Again, for each node: I should be able to look up my node, and for each service you're going to give me a list of which backends I should choose, in which order, and I can then program my iptables based on that. And because we did it as a Kubernetes resource, I can watch it.
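The declarative shape being floated, a resource that lists, per node and per service, which backends to choose in which order, and that a node-local proxy can watch rather than poll, could look roughly like this. The structure and names are invented for illustration; no such Kubernetes resource is defined in the proposal.

```python
# Hypothetical declarative routing resource, keyed by node, then service.
# Each value is an ordered backend list that a node-local proxy would
# translate into iptables rules.
routing_resource = {
    "node-1": {"db": ["10.0.0.3", "10.0.0.7"]},
    "node-2": {"db": ["10.0.0.7", "10.0.0.3"]},
}

def backends_for(resource, node, service):
    """What a proxy on `node` would read before programming iptables:
    an ordered backend list declared centrally, not computed locally.
    An empty list means no declaration, so the proxy falls back to
    its default behavior."""
    return resource.get(node, {}).get(service, [])
```

Because the mapping is plain declared data rather than a callout, a proxy could consume it through the ordinary list/watch machinery, which is the "I can watch it" point made above.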
C: This was helpful. I'm certainly going to think about that, John and Mike. I think there's something there; it is an interesting idea. It certainly has an opportunity to follow a lot of patterns that have been established already through Kubernetes. So let's continue to chew on that, and we'll take it to the issue and the proposal here.
A: So basically, there's a lot of schedule info there, and stuff like that: feature freeze, etc. But if you look at the top of the SIG Network agenda, we kind of have our features spreadsheet linked there, in the important-links part, and that's what I was kind of looking at. Thank you. Yep.
C: More or less. But if you think it's not really that useful, if you think this is merely an enabler for dual stack, and dual stack is what's really important, then let's not rush; you know, rushing makes mistakes. If we want to leave it as alpha and put all our energy into dual stack instead, then that's fine too. Yeah.
A: That would be my vote. So does that mean that this feature probably will not move out of alpha for 1.10, because it requires the multiple pod IPs? Yeah, I think that's what it means. Okay. And that is currently in the feature spreadsheet as not started and low priority, except we've marked IPv6 support as high priority, so I'm thinking we should probably move support for multiple pod IP addresses up to a high priority. Yeah.
C: So sorry, Dane, I want to back up a second. I'm thinking about single-family, v6-only, and whether that's useful, and I can think of places where that would be possibly useful; maybe I'm alone. But you know, we've got some things that we're thinking about that might be interesting for... I guess the question would be what...
C: ...broader sort of people to kick the tires on. Yeah, I mean, the reality is, you know, somebody mentioned here that nobody really tests alphas, and if we really want to get some miles on the v6 support and shake out any issues, maybe it would make sense to just finish the CI, call it beta in 1.10, and tell people it's single-family: we're working on the dual-family support, but single-family support for v6 we consider to be beta, and go nuts. I thought beta did imply more stability. It does imply more than...
C: To do dual-family, there's a whole bunch of changes that we need, and I started writing some of that stuff down, and we had a little discussion at KubeCon about it. But I don't think we fully enumerated all of the places that we'll need to touch, and you just touched on two or three of them.
A: Well, if anybody does get to that issue, put it in so we can track it that way. We also have deprecated kubenet, if anybody has better status than I do on that one. But I know that there are some outstanding things for CNI, with respect to portmap and some other bits for kubenet, that still need to be spun out. So I think that's still in progress, and it's probably not going to hit 1.10.
I: Daniel's name is on here, but he's sort of focused elsewhere right now. I don't know if Chris is here; I don't think so. I don't think anybody is actively looking at this. This is one of those bugs that's been open for a year and a half: we all sort of think we should do it, but it's not important enough.
A: I mean, I would say it's not not in progress; there are still bits of it. I know there's a bandwidth-shaping plugin that's shown up for CNI upstream, and that's one of the things that kubenet currently does. So there are bits and pieces, but I don't think anybody is, you know, focusing on this specifically; there are pieces of it, though, that are actually moving forward.
B: Actually, that one I had a question on, and I think we should add it as a row, okay, for where that one is going. And then this one is more related to the discussion at the SIG Network session at KubeCon, and I'll hopefully be sending out some kind of survey to Kubernetes users too, so we get some data as to where to go. That's kind of where I think, at least in 1.10, we'll probably collect data and start discussions.
A: Procedural question here, Tim: I'm just looking at the time left for this meeting, and whether we should switch over to some of the other items, like the pod-ready++ one, or whether we should continue trying to burn down this list and keep it updated. The only reason why maybe we'd want to continue doing that is that we have freezes in the near future.
C: Right, there's no API there. We'll want to look at it on our side and see that we're... like, I would use me as a yardstick for being confident, or me and the other people in this room, I guess. And maybe we should see if we can get the Azure folks, or one of the other major providers, to invest some time into figuring out whether they think it's viable, and if so, then...
M: You mean that kube-proxy will use IPVS? IPVS being GA doesn't mean it'd be the default that kube-proxy would use; no, I think those are two different steps. Okay. The reason: we enable Kubernetes on DC/OS, and the problem that we are seeing is that kube-proxy basically assumes that it is the only entity using IPVS. That is not true on some other platforms, DC/OS being one of them, and so I don't know if it should be a criterion for GA that, you know, it leads to errors.
N: For CI, we already have, in the test integration dashboard, a kubeadm GCE CoreDNS job; it was committed maybe right after alpha. So that is done. What is not done: I did a manual test of IPv6 with CoreDNS, so as soon as we have an e2e test for IPv6, then I can add the one for CoreDNS.
M: Let me see. So, just for anecdotal data: we have actually seen some issues with IPv6 and IPv4. We saw CoreDNS crashes on Red Hat Enterprise Linux and CentOS when using IPVS; it was not the kube-proxy IPVS, but it's something that we tested heavily, I think IPv6 with IPVS, and we have seen failures on different distros. That's...
A: All right, well, I'll leave that one alone for target release status. If something shows up in the next few weeks, then we can change that if we need to. I also think we should obviously skip multiple networks for the moment, since that's clearly not going to happen for the 1.10 release. Yep, exactly. So, if nobody has anything else to discuss...