From YouTube: Kubernetes SIG Network 20161201
Description: Kubernetes SIG Network 2016-12-01 meeting audio recording
B: We sat down with a bunch of the distinct SIG topics at hand and the various leads of the various areas, trying to figure out what was going to go on for 1.6. There's a sort of general sense, across a lot of people and a lot of conversations, that some of the subsystems of the project are healthier than others; that is, I guess, the polite way to say it. And a lot of people were asking things like:
B: You know, when are we going to pay down debt, or when are we going to have a stability release or a maintenance release, or something like that? We didn't really want to lay down the law and say, well, 1.6 is going to be a stability release only, no new features. We felt that was a little draconian, considering that some of the SIGs and some of the subsystems are actually operating fairly healthily. Storage, for example, has spent the last quarter basically doing tests and not much else, so it didn't seem right.
B: From observing a bunch of the other SIGs, this SIG operates a little differently than most of the other ones. Most of the others are more development-focused; they're more like working groups in some sense, whereas this group is more of an idea foundry and a sounding board for us to check against each other on our needs. But the first question here is: do we own code in the Kubernetes repo? And the answer is clearly yes.
B: We have all of the plugin-related code. We have all of the kubenet stuff, all of the kube-proxy stuff, all of the DNS stuff, all of the ingress stuff. So there's clearly some large volume of code that we own. I would be shocked if the OWNERS files actually represent that appropriately, and I have not updated them recently. It would probably be worthwhile to go through those OWNERS files and see what changes we might want to make to them.
B: This question is specifically: does your SIG spend enough time responding to user issues on GitHub, Stack Overflow, and the mailing list? It didn't say Slack, but that seems like it's in the same category. I mean, let's be frank: one of the top things that people wrestle with when they're setting up Kubernetes is networking, right? And networking is a very broad topic, as we sort of covered with the code.
B: It's everything from the network drivers all the way up to the load balancers. So I think people do wrestle with this, and I think the short answer, my feeling, is probably no, we don't spend enough time doing user support on this. Or rather, the need for user support would perhaps be ameliorated if we had better documentation and better examples. I think a lot of people wrestle with the same basic issues, issues that we've done a relatively poor job of documenting.
B: Now I think we'll unpack that, so let's go through the questions. The first thing we'll talk about is: has the balance between new features and stabilization been good over the last two releases? So, over the last two releases, what is the perception from the peanut gallery (there's a large number of quiet people on this call)? What is the perception in terms of our balance of network development and features versus stability, documentation, and testing?
E: On that front, I personally felt a little frustrated by the speed at which some of this stuff gets done. It doesn't seem like we're moving forward as quickly as I would like. Partly that's because we can't necessarily achieve consensus on some of this stuff, because some of these are hard problems.
B: I know on our end, Jiang, back here in the back bedroom, has been doing a lot of work to help DNS scale. DNS is not really a topic that we've talked about much here in the SIG, but, and there's an opportunity for somebody to argue with me here, I do think that DNS, being a core network service of the system, does fall under the SIG's purview. Someone else here in the back has also done a lot of work making DNS scale better, making it perform better, and actually measuring it.
B: For the first time, like, ever. So DNS is covered. And another engineer over here has spent a lot of work on the drivers and kubenet, porting all that stuff, and working on CRI.
I: So, to the person who was talking about stabilization: are there any broad segments that you feel are not stable? That's attached to the question, and we don't have to have an answer now, but maybe we should answer it as a community. If we kind of want to put effort into something, we would figure out what the stack rank of things to do is.
F: I was not suggesting that it's not stable. I was just saying that the activities of the SIG Network workgroup have done little so far toward even worrying about stabilization and documentation. For instance, as an example: we spent three or four months designing the network policy APIs, but we did not even worry about providing an implementation of the network policy APIs that works, for instance, with kubenet.
B: I mean, our docs are, I guess, on average no worse than the rest of the docs, but that's a pretty low bar, and we are working on that. We actually have real tech writers working on this now; they took a look at our docs and shook their heads, and they're getting into the meat of it, but they've got a lot of work in front of them. I do think our docs are not great. I know we have a lot of open bug reports that we have not had time to reproduce or investigate.
H: We're also missing performance testing. We don't have a benchmark for any of our network backends either. Like in the 1.5 or 1.4 release, just before the release, someone on our team actually found that there was a performance regression due to the different operating system we ran on. That was purely manual; we just randomly bumped into it. We don't have any benchmarks, so that's one big missing piece right now.

B: That's a great one!
C: I don't know about these things, I don't know. But if the answer is sort of outlined in this discussion, it would certainly be worthwhile, I thought so.
B: Some of them are obvious, like we should have better docs, but if we can get a little bit more concrete than that, it would actually be really helpful in figuring out where to spend our time in the coming release. That way we can trade off that effort against, you know, should we bring network policy to GA, or the other five or six things that are on our plate.
B: Sorry, I'm going to run through the agenda we have to talk about today. There's a bunch of stuff in it, and a bunch of PRs that have been open for a while that really need to get merged, and I apologize for some of those. But we need to figure out: are those the most important and impactful things that this SIG can deliver?
B: If we think of 1.6 as primarily a pay-down release, which I would like to advocate that we should, then what are the things we can do to pay down some of the debt that we've accumulated? This is not unique to our SIG, by the way; the whole project has accumulated debt, as any good project does, and occasionally we have to take a hit and pay it down, right?
A: From my perspective, I don't really know the first place to look if I wanted to find all of the debt that exists, though I certainly know about some that I'm familiar with. But okay, is there a place where we, as a group, can sort of centralize what things are on the table to try to stabilize in 1.6? It sure would help.
B: I can craft up some GitHub search queries that will sort of show us all the bugs that are related to the topics that I think fall under our charter, and then we can at least have a look at the bugs. I don't know the number off the top of my head, but I'd be surprised if it's less than three digits.
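As a rough illustration of the kind of search query being described here, a sketch follows; the repo name is real, but the label and topic keywords are assumptions for illustration, not anything stated in the meeting.

```python
# Build GitHub issue-search query strings for bugs that might fall under
# the SIG's charter. The label (kind/bug) and topic keywords are
# illustrative guesses, not an authoritative list.
REPO = "kubernetes/kubernetes"
TOPICS = ["kube-proxy", "dns", "ingress", "network plugin"]

def issue_search_query(topic, repo=REPO):
    # The resulting string is usable in GitHub's issue search box
    # or its /search/issues API endpoint.
    return f'repo:{repo} is:issue is:open label:kind/bug "{topic}"'

queries = [issue_search_query(t) for t in TOPICS]
for q in queries:
    print(q)
```

Each query narrows the repo's open bug reports to one networking topic, so the SIG could triage them list by list.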
B: Okay, the last part of this was that we should think about our charter. We've never really written a charter for the SIG about what we think is in scope. I just listed out 15 things that I think are related to networking, but I don't think we've ever actually formalized that. It would perhaps be interesting to try to write that down.
G: Yeah, I'm here. Yeah, so recently we have been getting a lot of requests for this. The use cases that people have been pointing to: one is related to OpenStack, where they have a requirement to separate out some of the traffic. They would like to actually use interface-level separation for isolation.
B: So I think there's the ultimate abstraction, the ideal data model for this, which involves a noun for "network", which implies interfaces, and control APIs that let you decide which pods join which networks. I think that we may, probably will, get there, but I think there's probably an intermediate step, which would be to allow multiple interfaces without having Kubernetes itself be aware of that fact.
E: Partly that's the CNI driver, because it picks the first CNI config file, because we never decided what to do about multiple networks. Right, and the problem with running both of those files would be that they would get run for every single pod on that particular node, and that may not be exactly what you want.
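A minimal sketch of the first-config-file behavior being described; the directory path and file extensions follow common CNI convention, but this is illustrative, not the actual kubelet code.

```python
import os

def pick_cni_config(conf_dir="/etc/cni/net.d"):
    """Mimic the behavior described above: of all CNI config files
    present on the node, take only the lexicographically first one."""
    candidates = sorted(
        f for f in os.listdir(conf_dir)
        if f.endswith((".conf", ".conflist", ".json"))
    )
    if not candidates:
        return None  # no network configured on this node
    # Every other config file on the node is silently ignored,
    # which is why a second network never takes effect.
    return os.path.join(conf_dir, candidates[0])
```

So with both `10-flannel.conf` and `20-bridge.conf` present, only `10-flannel.conf` would ever be used, for every pod on the node.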
E: Yep, this would allow the CNI plugin to return an array of interfaces, both pod interfaces and host interfaces, as well as multiple IP addresses, either IPv4 or IPv6, and those IP addresses could be attached to any of the interfaces the plugin passes back. So at least at that point we would have the information, and then we would need to plumb that through Kubernetes.
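A sketch of what such a multi-interface, multi-IP plugin result could look like, loosely following the CNI v0.3.0-style result shape in which each IP references an interface by index; the interface names, sandbox path, and addresses here are made up for illustration.

```python
import json

# Illustrative CNI-style result: two interfaces in the pod's network
# namespace, one IPv4 and one IPv6 address, each bound to an interface
# via the "interface" index field.
result = {
    "cniVersion": "0.3.0",
    "interfaces": [
        {"name": "eth0", "sandbox": "/var/run/netns/pod1"},  # primary
        {"name": "net1", "sandbox": "/var/run/netns/pod1"},  # secondary
    ],
    "ips": [
        {"version": "4", "address": "10.1.0.5/24", "interface": 0},
        {"version": "6", "address": "fd00::5/64", "interface": 1},
    ],
}
print(json.dumps(result, indent=2))
```

The point is just that the plugin can hand back enough structure for the runtime to know which address belongs to which interface; how Kubernetes would then surface that is the open plumbing question.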
B: Okay, so it sounds like there's good legwork happening there, but let's do a reality check. This is one of those features that is going to get implemented when somebody volunteers, and it's not, I think, the sort of thing at this point that the core team of people who are currently working on stuff is going to implement. I mean, I don't know; I can't speak for Red Hat.
B: Okay. I'm happy to review designs, to help craft the design for it, and to help shepherd the overall design through, but I need somebody to own it. I need somebody to be responsible for writing the design doc, comparing the different options, figuring out what doesn't work today (it sounds like Dan's already got a lot of that in his head), figuring out what doesn't work and how we can work around it...
B: ...what the implications of those workarounds are for the system at large; prototyping, implementing, testing, and documenting. This is not a small feature, and it has some real ripple out into the system. I think it's something that we do want, and I think this is a great opportunity for somebody in the community to step up and help out.
B: We need to find a way to report that through the API; how do we designate which IP is for what? What does it mean for testing, and how do we get test coverage of this? How do we document it, and what is the appropriate way to describe it, like what is the actual use case and how do we expect people to use this? Those sorts of things. It's just going to need an owner, end to end. I think it's an awesome idea; we just need somebody. Okay, so Joe G, if you're signing up for that, it sounds like Dan can give you a little context.
K: What is the envisioned strategy for getting these IP addresses to the pods? Are there supposed to be different host networks configured, so that you're saying this pod should get IP addresses from both networks that the host sees, or is it just, you know, yet another IP address?
E: I would say the first step would be that if you are writing a CNI plugin, that plugin can create one or more interfaces for the pod, and that plugin can assign one or more IP addresses to any of those interfaces and successfully pass that back. I wouldn't necessarily envision specifying multiple CNI files yet. Okay, that raises questions about...
D: I've been discussing this for the past year. I'm sorry, let me restate: I've been thinking about making my CNI plugin return a floating IP, to be used by the kubelet in probing the pod, and a regular IP, which is the IP on the tenant network. For that I don't need additional IPAM, but I do need some global knob at the operator level telling my CNI plugin how to make the floating IP that we will use for probing.
D: Talking about reporting multiple IPs to the API server: there's the main IP you use for damn near everything, everything except the exceptions. The main IP would be the IP on the tenant network, gotcha, and the exception would be the public one; I would have two interfaces there.
D: Right, right. Where I was going was thinking more about Joe G's cases. What I've also thought about, but haven't brought up because I'm also limiting my scope, is stuff like injecting network functions. Once I go beyond the use case that I have talked about, I start wanting to talk about exclusive networks, and maybe IPAM on each of those networks, and so it kind of leads into a bigger story. That's why I characterized this discussion as the progression that I think has been raised.
B: So I think this is a great place where we can chew on it and block out, say, these are the things that the first-order proposal doc should cover. I think you're right; I think the final solution does involve a noun for networks and some higher-level sort of admin APIs to go into this, but I don't know that that's the first deliverable. It's certainly not the MVP, and this is the place where the design proposal can start to cover, like, generations of the solution.
A: Right. Now, then, you had raised node ready conditions, yep.
E: Yes. There was some stuff that I think Justin had pushed and gotten merged for node ready conditions on AWS that dealt with admission control and such, and I seem to recall that Jordan had some problems with that approach and suggested a completely alternate approach. I was wondering, Tim, if you knew where that eventually landed: whether it's going to get backed out and somebody is going to go with Jordan's approach instead.
E: Okay, yeah, so the fundamental problem is that there are some network plugins that are local-only. For example, the kubenet plugin: the kubenet plugin just sets up a Linux bridge, assigns a range to the bridge, and pods get stuck on that bridge. The kubenet plugin doesn't know anything about being in a cloud; it doesn't care about being in a cloud.
B: I don't recall what Jordan's approach was, but I spoke with Clayton and Justin at KubeCon two weeks ago (by the way, it was nice to meet everybody there), and I spoke with them about this generic initializers idea, which I think is something that we need for the system overall. The idea being that there would be some way to configure a list of, essentially, gates that get installed on a node when the node is created. So as part of admission control: I create a node, and the admission controller...
B: I don't think it overlaps with scheduling; I'm not sure how. I'll go back and read Jordan's notes. Okay, my feeling was, Dan, I read your PR and I hated it, and I thought and I thought and I thought; I talked to Justin, and I thought about these initializers, and I just can't come up with a better answer. So I'm not...
M: So I can give a brief overview of, you know, what I think has been going on in the industry. I see William is here too, from Buoyant, so he could probably chime in just as well as I could about the emergence of things in this space and why people are interested in them, if that would be helpful.
M: This primarily comes from an L7 view of the world, right, where you have a whole bunch of services talking to each other, let's say primarily over HTTP, and a world where you have pretty crappy L7 client libraries for doing networking, and people want better behavior out of their L7 stack, whether that's load balancing or whatever. The existing Kubernetes L3/L4 load-balancing mechanisms don't really work all that well for services, or virtual services, like this.
M: This is largely a pattern that Google has solved for itself internally by building really heavyweight client libraries that cost nine years' worth of effort and aren't in a form that we could realistically ship over the wall, beyond our efforts around gRPC. If you look at what other companies have done, you could probably say that Netflix has probably got the most brand awareness in this space.
M: They also wrote Ribbon, which is a library solution to this problem. And then you can look at things like Finagle, right, which came out of Twitter and is now being used by linkerd; again, a kind of heavyweight client-library solution to the problem, which linkerd has now packaged up into a sidecar proxy to provide the equivalent behavior for HTTP traffic. And there are lots and lots of people building services this way, using either fairly thin to moderately thick HTTP or L7 client libraries to talk among services.
M: Whatever they're expressed in doesn't really matter; they all pretty universally hit the same set of problems, right. They all need good load-balancing solutions if they have moderate scale; they all need good monitoring, because monitoring is expensive to build, and building it N times, or relying on a client library to do it for you, can be quite problematic. So that's kind of the baseline set of requirements. William, I don't know if you want to expand on that.
N: I thought that was really good. The only thing I would add is that the heavyweight client-library approach is also what we did at Twitter, and as you move to a polyglot stack it gets harder and harder to maintain that approach. So that's why I think there's interest in this area, too, yeah.
D: Yeah, getting on to the question I wanted to ask, which is: to what degree can this be separated out behind what looks like a network interface, and to what degree do you really need or want to be solving these problems in a client library that's integrated into the client process?
M: I've talked to quite a number of companies that have had that problem, and I'm sure William has too. So yeah, engineering diversity is a real problem. It's a problem internally at Google, too: we do use a sidecar solution to solve this problem, particularly the thick-client-library problem, where we have storage systems that have fairly thick, stateful clients to be able to do efficient work.
M: Well, for a certain class of calls, fallback actions like retry and failover do not have to be in the client process, as long as there is some understanding at the protocol level about idempotency, or about how network failures are supposed to be reasoned about. Yes, some of the more advanced ones do belong in the application, but then they're probably application-specific at that point.
K: Do you envision that there might be the ability to specify that traffic should flow through a service, or a series of services, implemented as a proxy running in a pod, I'm guessing, and that these sorts of things can then be switched into the traffic path in a similar way, related to the same mechanism by which we are currently putting network policies into the traffic path?
M: Yes. I don't know if network policy is exactly the right mechanism, though. One of the things I wanted to talk a bit about was ingress, because ingress is a Kubernetes configuration mechanism designed to talk about traffic coming into the cluster from outside, primarily, although it can be repurposed. A lot of these higher-level networking functions want to tie to some kind of DNS-addressable thing, particularly when you're talking HTTP to each other, so it's almost like a virtual service.
M: That lets you really specify a bunch of rules about what the proxy should route to downstream services. Some of the policies are fairly implicit, things like retry and failover. Yes, you might have knobs to choose behavior, but HTTP, the protocol itself, actually says quite a lot of things, and there are quite reasonable defaults for a lot of the behaviors. I haven't taken a detailed look at network policy; maybe Tim has.
M: So ingress is probably the closest thing on that side of the fence, and then there's load balancing. L7 load balancing isn't really covered all that well in the existing Kubernetes configuration in any of these places, except the stuff that ingress controllers do with services that are bound to one another.
D: A question that I'm wondering about here: to use a sidecar, you either have the current network interface, you know, one IP, and to inject a sidecar that does something you have to do a bunch of iptables rules to capture traffic to services; or you use the programming model where the main app in the pod is connecting, you know, to a localhost-per-service thing or something like that, right?
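A rough sketch of the iptables-capture flavor of sidecar injection described here; the port, service CIDR, and proxy UID are assumptions for illustration, not any project's actual ruleset.

```python
# Generate the iptables commands that would transparently steer a pod's
# outbound service traffic into a local sidecar proxy. All three
# constants are illustrative assumptions.
SIDECAR_PORT = 15001          # assumed local port the proxy listens on
SERVICE_CIDR = "10.0.0.0/16"  # assumed cluster service IP range
PROXY_UID = 1337              # assumed UID the proxy runs under

def capture_rules(service_cidr=SERVICE_CIDR, proxy_port=SIDECAR_PORT,
                  proxy_uid=PROXY_UID):
    """Return iptables commands that redirect service-bound TCP traffic
    to the sidecar while exempting the sidecar's own traffic."""
    return [
        # Exempt traffic generated by the proxy itself to avoid loops.
        f"iptables -t nat -I OUTPUT -p tcp -m owner "
        f"--uid-owner {proxy_uid} -j RETURN",
        # Redirect everything else headed for the service range.
        f"iptables -t nat -A OUTPUT -p tcp -d {service_cidr} "
        f"-j REDIRECT --to-ports {proxy_port}",
    ]

for rule in capture_rules():
    print(rule)
```

This is the "capture" model: the application keeps dialing service IPs as before, and the NAT rules silently bounce those connections through the proxy, which is what makes the approach transparent to the app.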
M: The former. So, if you've been reading Enrico's doc, it does a pretty good job of capturing this. Given that at least the presumed goal was that we should be doing this transparently, or reasonably transparently, the assumption was that the applications didn't have to make changes, which I think is implied by the latter part of your statement.
M: He sent it to the networking SIG mailing list, so people could read it; I'll dump it into the chat as well. Enrico is also dialed in here, by the way. So, yes.
O: While you look for that doc, I just wanted to say: I work for a large bank, and this idea of having an out-of-process proxy, and being able to take our thousands of legacy apps out of where they are and put them into a more modern environment, is a big deal for us. So not only is there the language sprawl you mentioned, which affects us as well, but also just a huge legacy base that we hope to be able to bring across into this model.
M: It is not dissimilar, yes. You know, I've had conversations with people along those lines. I would say, obviously, if you can inject a proxy, then you can inject behavior, and that behavior could be things like access policy, or an authentication mechanism, or load-balancing behavior, or any manner of things really. So yes, they are quite similar in spirit and maybe in specification, I don't know. Again, I kind of defer to Tim about what the right vehicle for expressing this behavior is.
B: The thing that I wrestle with, with entering sort of NFV territory, is that it sort of depends on a really programmable network, right, and a lot of what people are trying to do with Kubernetes is to cut out the overlays and get down to more direct networking. Obviously that's not going to be everybody, and there's a trade-off to be made here of, you know, control and manageability versus performance. And so what I...
O: Yeah, thank you. And I think that, unfortunately, there's a lot of terminology overlap with the networking world, with network service function chaining and NFV and these other things, but my personal opinion is that they're pretty separate things; I think they're operating at different levels. You know, there's this sort of proxy stuff, which is for layers four through seven, for the application...
K: I work with Chris; we've been discussing these concepts of inserting functions into the data stream, and I think, Michael, he told me about the conversation he had with you, where you mentioned that they are very, very similar. For us, those who are trying to implement this, below the surface, behind the scenes, there is a lot of similarity in the way this is implemented, but we understand that there is a distinction.
M: The immediate call to action is to read Enrico's document about how these proxies are expected to be injected into the network. That's probably the most immediately tractable thing. Obviously, from the Kubernetes point of view, we want to be able to have this mechanism work in a proxy-vendor-independent way, so anybody can bring their own proxy.
M: So yes, my primary concern is to make sure that we can come to agreement about the injection process, what that looks like, and how it should be integrated into Kubernetes itself, because there are performance concerns around how you consume all this information from the APIs and listen for changes in the network.
B: All right, well, thanks everyone. Sorry we didn't get to everything on the agenda. Dan Winship, I started looking again at that PR and playing with a few things regarding the policy. I think we should make a decision there soon: either turn to fixing it and all the code generators, and convincing Clayton that he's wrong, or commit to abandoning the idea.
B: I'll just respond on the PR. I'm sort of leaning in that direction too. The engineer in me really wants to fix it, because I know it'll come up again at some point later and then we'll curse ourselves for not having fixed it, but I'm not sure that it's the best use of time in the medium term. Or, put another way: I've looked at those code generators before; they're hard to fix. Yeah.