Description
Join Jay as we go through the kube-proxy backlog and look at how it is evolving with KPNG, a solution to many of the existing issues around service load balancing.
Jay: What's up, Ricardo? Let's see — Matt's here. Matthew, hi everybody! We're just getting started here. This is going to be a really interesting session, because Mikaël and I were just kind of hacking around: I was testing KPNG and I think I wound up finding a tiny little bug involving egress, so we might just hack on it live. Big thanks to Mikaël for joining the stream, and Ricardo, I also added you on the stream and sent you a link — if you want to join the actual stream you're welcome to. Either way, I guess we'll get started. So welcome, everybody. It's April the 30th, it's my second TGIK, my name is Jay, and we have a special guest today: Mikaël. He's the creator of KPNG, he's also my friend, he lives in France, I think, and he has a solution to the kube-proxy problem. So we're going to walk through some of that today, and we're going to look at the general problems with kube-proxy that have sort of led us to the point of thinking about what we can do to make it easier to understand what's going on. So that's kind of it.
Jay: I don't have much for the week in review, but there are a few things I was looking at that I thought were kind of interesting, so let's get started. Antrea is now — not exactly a part of the CNCF, but incubating: if you look over here on my screen on the right, you'll see it's now a Sandbox project in the CNCF. Some of the Antrea team — Antonin, Abhishek, Jianjun, and everybody else — have been working really hard on that. So that's really cool, that's one thing that's going on. Let's see what else we've got here. Submariner — okay, so Submariner is another entry.
Jay: For those who don't know, on the CNI side we tend to test a lot with Antrea and Calico over here, and sometimes with Cilium for development, at least when I do demos. Submariner is like a federation tool — a data-plane federation tool. Does anybody know much about Submariner? I don't know — maybe Per knows about it, he just joined. I've never used it, but evidently it solves this whole problem of: I've got two different clusters and I want the pods to talk to each other over some kind of overlay, or whatever you call it. So yeah, that's all the news — one second, wow.
Jay: Here we go, okay. Let's see what else we've got — okay, Ingress. So another announcement for this week: we have a blog post that came out recently, and it's about the whole Gateway API stuff. I don't know if any of y'all are following the Gate— oh, "what problem does it solve?", Heist asks. So Submariner, I think, putatively, is supposed to solve cluster federation, right? And it's in the CNCF — I guess it's in the Sandbox now. So that's kind of the big update there, but I've never used it, so if anybody in the chat wants to — "yeah, that's the idea, Jay." So the next thing is this blog post that came out; I'll open this.
Jay: It's here — it's about the Gateway API. So how many of y'all know about the Gateway API? Hi Evan, hi AJ, hi Per, hi Suresh, hi Dennis from Germany, Irby from London. What's up, Matt. Oh — okay, oh yeah, I'll send it to you, Ricardo: the "Evolving Kubernetes networking with the Gateway API" post — it should be in your chat history, Ricardo. This post is from the Gateway API group in SIG Network — you know, the sort of L7 sub-project. The idea of the Gateway API is that it kind of evolves Ingress. I'm sure you all know about Ingress resources — I'm going to pull up the Ingress stuff on the side. This Gateway stuff is gathering a lot of momentum, and this blog post really describes why it's valuable. So I'm going to pull up the Ingress API on the side over here.

We've all kind of known about the Ingress API for a while, and I'm sure most of us have at least thought about using it, or used it. It gives you an L7 load balancer from the outside to the inside of your cluster, through a ClusterIP service, right? Anybody that's hosting an application in production knows about this. Ricardo knows a lot about it — I think he was working late on it last night.
Jay: Anybody using the NGINX Ingress has obviously dealt with this, but it wasn't really built for — it was kind of built quickly; it wasn't really built for multi-tenancy, and it has some issues around that. So this group in SIG Network was working on this idea of how to make it more modular and make it work for multiple people, so that developers could use one set of API constructs and administrators could use another set of API constructs to manage the way that gateways and ingress into your cluster work. All that conceptualization resulted in this new API, called the Gateway API, which kind of replaces — I don't know if I'm allowed to say this; I guess it's a replacement for Ingress, but maybe it's not — yeah, maybe it's the evolution of Ingress.

So there's a really good post on it, here on the Kubernetes blog, and it talks about how administrators use things one way and users can modify these other API objects — the HTTPRoute and the Service objects. So Ingress is moving forward, and I think a lot of this evolved from the early work that the Contour folks were doing.
Jay: They actually link to a really good presentation in here that describes why they needed to build this, and how there was all this fractionation in the Ingress ecosystem and it was just getting really messy. And actually, associated with that, there's a new little sub-project-type group that Ricardo started, to help maintain the sort of legacy-ish NGINX Ingress controller. So if anybody wants to get involved with that, reach out to Ricardo Katz in the upstream Kubernetes Slack.

So there's that — and what else do we have? We have another post that we came out with, about network policy conformance, that talks about where we're at in the network policy sub-project. By the way, this week in review is a little thin — it's mostly focused on some network-related issues — but if anybody has anything else they want to announce, I'm happy to talk about it.
Jay: So we kind of started the network policy sub-project a long time ago, and it was originally just because we wanted to make it easier to understand how to evaluate network policies. So we built these matrix tests — table tests — to give you sort of a truth matrix. And then recently Matt, who's here, wrote Cyclonus. You know, in upstream k8s we test 40 or 50 — or 30 or 40 — policies, or something like that, and we can validate any CNI provider on whether they support those various policy scenarios, and all the different connectivity that you may or may not want once you apply a policy. But Matt went ahead and sort of one-upped us and built something that actually tests hundreds of policies: it automatically generates them, and then it tags the policies by their various attributes.

So if you're confused about your CNI provider — whether it's the right solution for you, whether it supports all the network policies you care about — you can use Cyclonus to validate all sorts of different policies. So we talk about that, and we also talked about a few of the improvements we've made to the netpol API. We now have a default label in all k8s namespaces.
A
So
if
you
fire
up
a
1.20
cluster
you'll
get
this,
you
know
kubernetes.io
metadata.name
and
so
in
any
namespace,
which
means
you
can
make
Network
policies
against
any
against
any
any
namespace.
Whether
or
not
you
have
the
ability
to
label
it,
which
was
something
a
lot
of
folks
had
been
asking
for.
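(A minimal sketch of the kind of policy this enables — the policy name, namespaces, and pod selector below are made up for illustration; the kubernetes.io/metadata.name label itself is the real default label being described:)

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-monitoring   # hypothetical name
      namespace: web                # hypothetical namespace
    spec:
      podSelector: {}               # all pods in "web"
      policyTypes: ["Ingress"]
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              # select the source namespace by its default label,
              # without needing permission to label that namespace
              kubernetes.io/metadata.name: monitoring
    EOF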
Jay: There's also a fully qualified prototype of an FQDN network policy controller that came out from the folks at Google. That's pretty cool. If you go here, it's real easy to run — they've published the images, you literally just have to — where is it — it's like one or two commands; it's in the examples, I think. Let's see: where do you install — annotations, limitations, use cases — yeah, here it is. All you've got to do is apply this cert-manager manifest and then apply this controller, and it'll start a controller on your cluster. Then you can make policies against the L7 network policy, and it'll go and inspect the downstream IPs from that and create network policies on your cluster for it.
Jay: So that's kind of a cool update. And then we have port ranges. Ricardo worked real hard to implement port ranges, so now you have network policies that support port ranges. We have implemented that in at least a few of the CNIs — we've done it in Antrea for sure, I think we've done it in Calico, and I don't know if we did it in Cilium; Ricardo would know. I see you here, Ricardo — I'll turn your stream on.

Yeah, cool, cool. So yeah: port ranges. At least in some of the CNI providers we have them, which makes it a lot easier to write your policies. As an example of how you might be able to — you know, we don't have a port range in here; I guess we didn't do a very good job on this blog post, we left the port ranges out, didn't we. But anyways, port ranges look something like — let me pull up, down here, port ranges and services.
Jay: Yeah, here it is — cool, there it is. And wow, 27 files? Oh, this is because it's all the API changes, yeah. So here's the change: we now have an endPort, right? It used to be that you could only define a single port for a network policy to target, so your policy would only target that specific port if you mentioned it. But now you can actually have an endPort too, so your policy affects a whole range of ports.
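(Here's a minimal sketch of what such a policy looks like with the new field — the name and numbers are placeholders; the port/endPort pair is the point:)

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-port-range        # hypothetical name
    spec:
      podSelector: {}
      policyTypes: ["Egress"]
      egress:
      - ports:
        - protocol: TCP
          port: 32000               # start of the range
          endPort: 32768            # new field: the whole 32000-32768 range is allowed
    EOF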
Jay: So that is another change that came about. We have this post here that we'd love for you all to read. So there are a couple of different SIG Network groups represented there — the Gateway folks and the network policy folks — and I think that's all the news I've got for you. I'm just going to dive into stuff now; I think we found a bug, so we might just start hacking immediately.

What's up, Vlad, I just saw you joined — Vlad's here. Vlad, can you post the link to that test framework you've been hacking on, so I can show some folks? Because Vlad is working on making it easier to write upstream tests. I thought I had that link somewhere, but I haven't posted it in here yet — so once you post it, just let me know, I'll take a look at it and show some folks. Just interrupt me.
Jay: So, all right: why kube-proxy? Does anybody here want to take a guess, or want to suggest — does anybody here have any experiences with kube-proxy, any negative or positive experiences with it? Have you ever tried to scale it at 100, 200 nodes? Have you hit anything?

Oh cool, you added the endPort test to Cyclonus — good, so we have another test case in there.

Yeah — I think anybody who's run large clusters has hit this. So, back in the day, we used to try to test things at sort of large clusters — you know, thousand-node clusters and ten-thousand-node clusters — and you often hit issues where you have these watches. This was written up: these folks, OpenAI, actually wrote a good post about this. Let's see — yeah, so they wrote a good post about it, and if you look in here, one of the things they really talk about, that's kind of a good general rule, is that you don't want to have large DaemonSets in a large cluster, because DaemonSets, by definition, run on every node. So one of the problems with kube-proxy today is that it's a DaemonSet, right?
Jay: So if I kubectl get ds -n kube-system — not this one — right, I've got a kube-proxy running on every — yeah, I'll get pods -n kube-system, right. So I can see I've got a kube-proxy running on every node, and I can get its logs in kube-system, and I can see: not only is it running on every node, but it's running a watch. So I'm running a watch here on every node, right.
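(Roughly the commands being run on screen; the pod name is a placeholder:)

    kubectl get ds -n kube-system                  # kube-proxy runs as a DaemonSet
    kubectl get pods -n kube-system                # one kube-proxy pod per node
    kubectl logs -n kube-system kube-proxy-xxxxx   # each pod opens its own API watches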
Jay: That means every node in my cluster is watching a whole bunch of endpoints on my cluster. Now, they have EndpointSlices — that's, I guess, one of these things. There's also a cluster agent, and I'm not — they mentioned that as part of the solution to this problem of having too many watches, because it really bogs down the API server. And they also mentioned EndpointSlices, which make it so that you don't have to pull down all the endpoints, and that helps to improve performance for sure. But there are other problems with kube-proxy beyond just performance at large scale.
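(If you want to look at those on a cluster — EndpointSlices shard a Service's endpoints into smaller objects, so every watcher transfers less data per update:)

    kubectl get endpointslices -A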
Jay: Like, you know, it's really complicated. iptables itself is just really complicated, and forget about the fact that we support so many modes. As an example: Ravi made a PR a long time ago — I don't think it's been merged yet — because we had a problem on Windows. The problem on Windows is that you start kube-proxy up and you want it to run at a certain priority level; it's a Windows-specific thing to run it at a specific priority level, and it doesn't affect the Linux people. So you get these really weird, long reviews where you've got Linux people and Windows people all looking at it, trying to prioritize. Where's the pull request — I thought there was a pull request somewhere in here, but I don't see it. Yeah, that's weird — I do not see the pull request; I thought there was a PR, but I don't remember where it is. Oh — here it is, here's the pull request.

So, exactly: you jump into the kube-proxy code base, and you wind up seeing that there are options that conflate all these different backends. Kube-proxy has an IPVS implementation, an iptables implementation, and two more — a userspace and a kernel-space implementation — for Windows as well as Linux. So it has all these different ways it can run, and it's all coupled in the code base, and we'll look at that in a second. Actually, we can just open Gitpod up and wait for it to come up — Gitpod's my favorite new tool, I use it all the time.
Jay: So you have provider-specific options that are all coupled to the same part of the code base — we can look at that in a second — and then let's look at a few other issues that are kind of typical to come across when people run into problems. Like, another typical issue that might get filed — let's load this one up: port ranges. Oh, this is a new one. Well, actually, I don't know if this is officially a huge problem with the proxy as it stands, but this is just one of the issues I wanted to mention that folks are working on — Promesh is working on this — the idea of supporting a range of ports, and there's all sorts of technical complexity around that. But it's not one of the motivations for KPNG specifically, at least I don't think it is; Mikaël can correct me if I'm wrong.

And then here's another one — this is an interesting one — because kube-proxy can write rules in so many different ways: it can write rules using iptables, it can write rules — I should probably show you this. So, you know, if I go into a node here — let's see, let's go to — this is a cluster.
Jay: If I — I'm going to go root — if I run iptables-save, okay, you're going to see that there are tons of iptables rules that are written for me by kube-proxy. So, you know, I could save this to a file, right, and I can open it up, and we can look, and we can see that for every service — every time I make a service — I have all these endpoints that get created. I have these here, right — so you can see each one of my services winds up going to a bunch of different endpoints, and then there's probabilistic load balancing to each endpoint. So there's like a 50% chance it'll go to this endpoint, and then there's a hundred percent chance it'll go to the endpoint below it, right?
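(The shape of those rules, as they appear in an iptables-save dump; the chain suffixes are hashes, and the names and IPs here are placeholders for a two-endpoint Service:)

    -A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-AAAAAAAAAAAAAAAA
    -A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-BBBBBBBBBBBBBBBB
    -A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:80
    -A KUBE-SEP-BBBBBBBBBBBBBBBB -p tcp -m tcp -j DNAT --to-destination 10.244.2.7:80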
Jay: But the fact is that there are just a lot of different ways to implement a service proxy, right? And even if you were to say that there's one great way to do it in Linux and everybody should just use that same way, we also have Windows, and Windows is a totally different operating system, so we can't use the same mechanism for load balancing on Windows as we do on Linux. And then —

Yeah, you're welcome, Ricardo. Okay, here we go — let me look at the questions, I've been ignoring you all. Yeah, what's up, Choco. Does that look better? Let me know if this looks better. Okay, I'm going to keep going — just go ahead and interrupt me, I'm totally down to be interrupted here. So, Suresh: why do we name it kube-proxy?
Jay: Oh, this is cool — this is the first time I've ever used these little bubbles, but see, I can click the bubbles, and yeah: why do we name it kube-proxy? What was it proxying? I think the reason it's called kube-proxy is — so I don't know; if Tim or Brendan Burns or anybody were here, they could really answer that historical question, but I have a —

Why is it showing me a diff here? So if you go in here — how do I get rid of this weird thing — okay, there we go. Okay: go func. So there's a go function somewhere in here — whenever you, I think it's maybe this one — whenever you connect, whenever you made a connection, originally, it would — maybe it's proxySocket, I don't know where it was, but it actually would create — yeah, proxyLoop, okay.
Jay: No — yeah, here it is, yeah, see. So, back when this first came out: every single time you would connect through a service endpoint to a pod, there would be this loop in golang — there was literally a user-space loop that would forward the TCP connection for you. So it just didn't scale at all, right.
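(Not the actual kube-proxy code, but you can reproduce the idea of that userspace mode with socat: one process accepts every connection and copies bytes back and forth, so each proxied connection costs user-space CPU, a file descriptor, and a trip up and down the stack. The backend address is a placeholder:)

    # user-space forwarder: listen on :8080, fork a relay per connection
    socat TCP-LISTEN:8080,fork,reuseaddr TCP:10.244.1.5:80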
Mikaël: So, even with a lot of CPU, you had the problem that you have at most 65k connections possible. So —

Jay: Yeah, yeah — okay, so I didn't even know anybody ever measured it; I just assumed they moved off it so quickly. Okay, cool. So, by the way, big thanks to Mikaël for joining us today, and Ricardo as well, because Mikaël has — what do they say — has forgotten more about kube-proxy than most of us will ever learn, right? Or whatever. He's kind of an expert: he worked with Ben Elder and Tim Hockin and all those people in the early days. So anyways, that's why it's called that — it's a great question. Oh, "still not clear" — let's see here, make it bigger. Cool, now that's probably big enough, right?
Jay: Wait — oh gee, oh, okay, I thought — okay, okay, sorry, my screen was — okay. "Can you crank the font?" Okay. Is it better now, Choco? "It's good now" — hey, okay. So, yeah: this just didn't scale. You had to make a new one, and it's always faster when you go through the kernel — you don't have to send all the stuff back up the stack. So then they went and built the iptables proxier, and that was a huge improvement.

Okay, and now we have this other implementation, which is much more performant — because what does it do? Let's look at what it does. What it does is it makes these rules, and, as I'm sure we all know, iptables is in the kernel itself, so you don't have to go all the way up to a user-space program and forward packets manually one at a time. It's just done right at the kernel level. And I guess, since Ricardo's here, he's probably going to say BPF is even faster, because you don't even have to go into the kernel, or whatever — but none of the Cilium people are here, so nobody's going to say that, I guess. I don't know — there's that whole thing, but I'm not good enough at this stuff to really have an opinion on it. So anyways.
Jay: It makes these load-balancing rules, right? So rather than starting a function for each connection, you hit the kernel, and the kernel quickly follows these rules: we go from the kube service chain to the kube service endpoint chain, and then we probabilistically land on one of the endpoints — and we'll walk through one of these in detail. I just wanted to show you: this is an Antrea cluster, and some people sometimes ask, Antrea versus Calico, what's the difference from a kube-proxy perspective? There's not a huge difference, except one thing, which is that Calico actually makes iptables rules for network policies. So if I say kubectl — let me make sure these aren't here. I'm so bad at this font thing; I'll get better at it. Okay, keep going, bigger.

Yeah — so if I go in here and I say — let's go — I think this is a kind cluster; the other one was a TKG cluster. This is a kind cluster, though, so to exec into one of these I'm going to have to exec into the container: docker ps, then docker exec -ti <node> sh. Okay, so let me get in here, and so —
Jay: I have the same thing, right? So I'm in a Calico cluster, and you can see we have a bunch of the same types of rules: the service load-balancing rules are the same regardless of what CNI you have — your kube-proxy is doing what it does — and in each case here we're running iptables-mode kube-proxy. So we have iptables rules in both of our clusters. But there's a slight difference, which is that if I, for example, rerun that command I ran — iptables-save, into f1 — and then I create these network policies, kubectl create — okay. So now, if I go create these —

Suresh, you had like ten minutes of fame there. What was I doing? I forgot. Okay, here we go. So now I made — if I do iptables-save into f2, okay, now I can diff f1 and f2, yeah, and you can see how there are all these new rules that were made. Because I had a network policy that was on a DaemonSet, that means this particular node that I'm in has all these new iptables rules that were generated by Calico. So, for example: Calico implements network policy in iptables.
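(The whole experiment, condensed; policies.yaml stands in for whatever NetworkPolicies you apply, and cali- is Calico's chain prefix:)

    iptables-save > f1
    kubectl apply -f policies.yaml   # hypothetical file of NetworkPolicies
    iptables-save > f2
    diff f1 f2                       # on Calico, new cali-* chains and rules show up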
Jay: Now, meanwhile, Antrea — okay, if I go back here — Antrea does not implement network policies using iptables; it implements them using Open vSwitch, which is a totally different way of doing it, but there's an equivalent command I could run in Antrea where I would see the network policies implemented. None of this is really relevant past the point of just the fact that, okay: in this case we're running two clusters, the clusters have totally different CNIs, both of them have a kube-proxy running, and in both of those clusters there are a lot of iptables rules — but one of the CNIs is actually adding new iptables rules on top of that. If we were to run kube-proxy in a different mode, like IPVS, then we would still have some iptables rules, because Calico would be writing them; but with Antrea, for example, we'd have no special iptables rules.

Okay — so that's how all this stuff fits together, because people sometimes get confused about how iptables and IPVS and all that interrelate with the proxy. Depending on how you run kube-proxy, it's going to implement the data plane differently. And that's why — so there's a lot of complexity in the current code base, and it's very hard to add features to kube-proxy at this point. It's very hard to reason about its performance generically, and it's very hard to scale it to thousands of nodes, because you now have a DaemonSet that has to watch all these endpoints. And that's why we have KPNG. So I'm going to jump over to — oh, and Ricardo has a good post where he went through some of this. Ricardo has this post here.
Jay: If anybody wants to read it — he went through the whole thing; there's a really good diagram of the different iptables chains — POSTROUTING, FORWARD, INPUT, and PREROUTING — and then how the different rules inside the chains wind up routing you from the kernel to a local process. Anyways — so, KPNG. We got to look at the code base already because of that question Suresh asked. Now, what KPNG says is: well, we have this huge glob of different implementation details here — what if we took the generic stuff that every proxy needs to do, and then — I have a diagram of this here, I think; yeah, here it is. The idea behind KPNG is: why don't we separate out the part that's watching the API server? Because, as we saw when I was in here: as soon as you start kube-proxy up, within one process it's doing a whole bunch of watching of the API server, but it's also doing low-level network stuff, in the same place. So why don't we decouple those? Then you don't have to run a DaemonSet on your cluster to watch the API server anymore: you've got a decoupling, where one thing is doing the watching up here, and then other stuff is doing the backend work.
Jay: Here it is — yeah, let me know if this image isn't — I think this image is still too small, so let me — yeah, that's better. Oh, look at that, nice and big, okay. So the point of this whole thing: you go in here, and you're able to watch the API server with maybe one pod, and then you could have a totally different set of pods — you can have an agent running on every node, that's fine, and it's maintaining the routing rules. It's maintaining these rules, and that's being done at a per-node level, but it doesn't put any extra load on the API server. So you could theoretically have a million of these nodes and you wouldn't have any extra load on the API server. That's kind of the big selling point here: it gives you logical decoupling, but it also gives you a performance —
Jay: — a huge performance boost; a theoretical performance boost which is very easy to see here. So now, what that would allow us to do, for example — we can actually look at an example. If we go to Ricardo: Ricardo made his own backend for KPNG, and it doesn't have anything to do with the — look, he even made it specifically to make me happy, see. So he made his own KPNG backend, and all it does is write these IPVS rules. So this is not iptables, this is IPVS, but it's the same idea, right?
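(If you want to see what an IPVS-style data plane looks like on a node, ipvsadm -Ln lists virtual services and their real servers. This output is illustrative, not captured from this cluster:)

    $ ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port   Forward Weight ActiveConn InActConn
    TCP  10.96.0.1:443 rr
      -> 172.18.0.2:6443      Masq    1      0          0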
Jay: So it goes in, and if we go here — if you go into — where is our diagram SVG — here it is, right: so he implemented this part. Okay, he implemented this part down here, and he didn't have to do anything else; he was able to just talk to this local sink, and all of the other stuff was done by KPNG.

Now, KPNG comes in-tree with an nftables implementation. It turns out — Ricardo and Mikaël actually explained this to me this morning; I didn't know this — nftables is actually the underlying implementation nowadays. I guess everything uses nftables, or can use nftables, under the hood in the Linux kernel for all this stuff, so iptables is, I guess, the old way of writing these firewall rules. Okay — and before I jump over to that, I just want to say one other thing.
Jay: This was actually made by Laurie — so, the kube-proxy sub-project. We have this goal of trying to figure out how we can wrap our heads around all these kube-proxy problems, build a community around them, and so on and so forth. So if anyone wants to help with the sort of day-to-day grind of going through issues — figuring out what's wrong, figuring out whether we can do something about it or not — we have lots and lots of issues to go through, and we meet up every Friday morning at 8:30 PST.

This is just an example of some of the issues, and you can see a lot of them are configuration-related. For example, one thing a lot of people have been asking for is the ability for kube-proxy and the kubelet to have a load-balanced client, so that you don't have to do a big dance around handing kube-proxy a load-balancer endpoint for the API server. That's one issue, and part of what it takes to help with this stuff is just to be aware and be available in the community, to ask people what to do with these issues and follow up with them.

We have some situations where proxies implement logic differently. For example, in the Windows kube-proxy we haven't fully implemented terminating endpoints — which is, you know, when a pod goes down and the service has an endpoint flagged as terminating, so load balancing stops against it. So there's a whole different range of stuff, and even if you're not a deep networking person, there's a whole bunch of stuff that's just related to configuration.
Jay: So, for example, we have this priority-flag issue that I mentioned earlier. We have config-map reloads — this is an issue I actually filed, but it's really an issue about a refactor in kube-proxy: right now you don't actually get a hot reload. The way you configure kube-proxy right now is through a ConfigMap. So if I kubectl get cm -n kube-system — if you look here, I have a ConfigMap, and I could edit it: kubectl edit cm kube-proxy -n kube-system. It turns out kube-proxy accepts a lot of flags too, but none of the flags really do anything; they're used to generate a kube-proxy config. So we're trying to figure out — it's related to the component-config group — what to do about the fact that people are kind of confused by this. But for now, just know that if you change something in here — for example, say I decided to change the UDP idle timeout; let's say I make it 10 seconds, kind of a weird value, or let's say I make it one second — if I was to do this, okay, it would not affect anything until I restarted my kube-proxy. So that's just one example. We don't really know what to do with this; it confuses people, because they change the ConfigMap and, because it's Kubernetes, they expect things will be reconciled. But in some cases that's not the case: you actually have to restart the kube-proxy DaemonSet.
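(So, as things stand, the working recipe is edit-then-restart; a sketch, assuming the standard kube-proxy DaemonSet name:)

    kubectl -n kube-system edit cm kube-proxy             # change a value in the config
    kubectl -n kube-system rollout restart ds/kube-proxy  # nothing is reconciled until the pods restart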
Jay: There's a big debate around whether that's the right thing to do or not, and so on. But anyways, I just wanted to make sure everyone knew that we do have a group that's working to create a consensus and a community — a small community of people taking action to help organize and fix these things, and to help each other triage them, because it's not something any one person can do. So join us, Fridays, 8:30 a.m. PST — you know —
Jay: — and we can ask in Slack about other details and stuff like that. Okay, so where were we? KPNG. Now we can go over to KPNG, so I'll show you how to get started with it, and then we'll dive into an interesting problem that we just actually found — we're not quite sure where it came from. Let's see: we have a script in the KPNG repo. You can check KPNG out on GitHub — it's under kubernetes-sigs. Oops, I meant to go to KPNG — KPNG — there we go. All right, so it's under kubernetes-sigs, and you can git clone it, whatever, and you should just go into the hack directory. In the hack directory we have — let me make sure I'm up to date; no, I'm on the wrong branch. Okay, let's get to master — let me stash all this, check out origin.

Also, we had like a little fire drill around how to actually configure kube-proxy. It's kind of interesting, and I wanted to draw a little diagram about that, so I guess I'll just poll you all: would you prefer to dive straight into KPNG, or would you prefer, before we do that, to take a look at how kube-proxy is configured? Because there's an interesting chicken-or-egg problem that we sometimes run into — it confused me — about how it's configured to talk to the API server even though there are no actual endpoints before it comes up. If there's no opinion, we're just going to dive into KPNG.
Jay: Good — yeah, cool, okay. So we have a script, and it's just kpng-local-up, and what it does is: it builds the container and it'll push it for you, so it should be easy for anybody to get started and even work on the source code — hack on it, make a PR if you want. We make a little role binding, and this ServiceAccount makes it so that KPNG can talk to the API server, and then we create a deployment. Actually, if you look in here at the KPNG deployment, it's a DaemonSet — so this here, right, we actually run it as a DaemonSet. It's important to realize you don't need to run KPNG this way: you can run it so it's not a DaemonSet — you can have a DaemonSet just for the data plane, and a pod or a deployment for the API-server interaction. But of course, for testing, there's no need to do that.

When you launch KPNG, you've got the KPNG server, and then you've got another container that does the data-plane stuff. These are two totally independent processes: the KPNG server is going to publish whatever state of the API needs to be listened to by the data-plane side, which needs to write the individual nftables rules. So let's run that and let it start, and we can dig into the individual containers once it's up.

There's nothing I need to pull or rebase against, is there, Mikaël? Oh, you added that — well, I do need that, okay; see, I knew there was something I needed. Okay, so here we go: we're deleting the cluster that we had. So again, this is kind of low-level stuff — obviously, if you're a user of KPNG you wouldn't need to go and build this image and all that. And it's not something that's production-ready: we don't have releases and stuff around it yet; there's a KEP for it. You want a diagram, Suresh? Yeah, let's do a diagram.
Jay: I just wanted somebody to ask me to draw a diagram, and somebody asked me to draw a diagram. So, one of the interesting problems with kube-proxy is — let's see if this works; Ricardo showed me this thing today where you can draw — it works, okay. So when you have kube-proxy, it sits here, right, and does anybody in the audience want to take a guess at — ? The API server, right: kube-proxy goes and talks to the API server, and the API server talks to etcd — etcd is here, okay. So when kube-proxy comes up — and let's say we have our nodes in our cluster. Let's say I have two nodes, one node here and one node here: this is my first node and this is my second node — okay, node two and node one. So you have a two-node cluster, right?

This is a horrible drawing, it makes no sense. Okay, here we go — that's better, right, okay, cool. So if I have these two, this guy is also going to have to talk to the API server. So this is the regular scenario; now there are a couple of problems we'll talk about here. When I start kube-proxy, that means both of these need some kind of certificate to talk to the API server — but they're also going to have to have a route to the API server. Now, whose job is it to make — most of you probably know that every cluster has an internal service endpoint, right? So if I go here and say kubectl get svc -A — here we go, okay, if I do that —
Jay: — there's the 10.96.0.1 service, right? Okay. Now, if you're running kube-proxy in a pod, you can't necessarily — you can't easily access that service, because kube-proxy hasn't created any of the — well, there's DNS to the route, but the route isn't routable yet, to an API-server endpoint — because whose job is it to make those endpoints routable? Kube-proxy's. So you have this chicken-or-egg problem.

So what you can do, to make a development environment for this — normally, in the real world, most of you folks have a load balancer, so normally we could get rid of this: what you have is a load balancer, and all of your traffic normally goes through the load balancer. So you could kind of do that: you could have your proxy go to your load balancer, and your load balancer knows how to get to your API server, and you have a certificate, whatever. But in this other chicken-or-egg situation, where you don't have the load balancer, what we've got going on is: it turns out that in Docker, for our development environment in kind — yeah, so in Docker I have an API server running on this node, and it turns out I can rely on there being a DNS name for this node. So if I go into one of my nodes here — let's go to our kind cluster; I have a kind cluster over here: calico-control-plane, calico-worker — here's its hostname, right. So in a kind cluster I have a hostname, calico-worker, and what I can do is have my kube-proxy — because I need to access the API server through valid DNS, through a valid hostname, since they don't let you do that insecure-access thing anymore — go through the DNS name, and that DNS will send me to the API server.

And the same thing here — well, actually, in this case it would not be calico-worker, because we actually access it through the calico-control-plane. We hit the calico-control-plane, and the control plane has the TLS — that's the TLS for the API-server endpoint. So this is kind of an idiosyncrasy about setting kube-proxy up, because any other DaemonSet that you were to set up would be able to — well —
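(In other words, the credentials handed to the proxy in this dev setup point at the control-plane container's Docker DNS name rather than at the in-cluster Service IP. A trimmed, hypothetical kubeconfig fragment to show the idea — only the server line is the point:)

    clusters:
    - name: kind-calico
      cluster:
        certificate-authority-data: <...>
        # the Docker-network DNS name of the control-plane container,
        # reachable before any Service routing exists on the node
        server: https://calico-control-plane:6443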
Jay: So that's how we are configuring these development environments for ourselves, so that we can just run a script and then have the thing running in a container. And for those of you interested in Windows: our solution to this, right now, on containerd, is to not run kube-proxy in a container at all, and instead run it as a process on the Windows system — because there is no easy way to get to the default Kubernetes API-server endpoint from kube-proxy running on Windows if you're on containerd; you don't get a bridge for free in containerd like you do in Docker. That's a Windows thing, though — I'm not going to get into that right now, because I know you all don't care about Windows.

So anyways, let's see. We should have kubectl get pods -A — so this is the issue: we just got this up and running, and we actually found — we might have introduced a bug in the past couple of days somehow, we're not sure — but we have KPNG up and running, so let's just take a look at what's running inside of it. Yeah, okay.
Jay: Wait — something weird happened here, though, because when I exec'd into the KPNG container, shouldn't it have told me that there are two containers? This gave me the — let me get rid of this. So let me open a shell up, and I'm inside of — just the same way we could do the same — so we are inside of the nftables container, and we can run — I guess I could just run this command on the Docker node, right, I don't really need to run it inside. It's better to run it on the node, because, see, over here when I was showing this to y'all —

Mikaël: You have a script — just do a cat of the setup script. That's just to set up masquerading for normal pods; that's another thing. That's a little setup, yeah.

Jay: Okay, here we go. So this is the equivalent of what we were doing in iptables, right? These are all the rules being written by nft, and you can see they're a lot easier to read than the iptables rules — so, for each chain, I guess, we have these routing rules.

Mikaël: Well, that's the thing we've seen before — those are there in pending mode, so they are not ready yet in your cluster. As they are not ready, the endpoints are not added by Kubernetes, so they are not added by KPNG. Oh —
Mikaël: One, two, three — the first big difference, which is what you'll love, is just above: you can use maps to jump to rules in the table. So it's much faster than doing a sequential read of all the rules like in iptables. So yeah, we have a direct map from the service IP, 10.96.0.1, to the DNAT rule, yeah.
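(A minimal sketch of that idea in nft syntax — not KPNG's actual generated ruleset, and the addresses are placeholders. A map keyed on the packet picks the DNAT target in one lookup, and numgen spreads connections across endpoints instead of a chain of probability rules:)

    table ip example {
      chain prerouting {
        type nat hook prerouting priority dstnat;
        # one map lookup instead of a sequential rule scan
        ip daddr 10.96.0.10 dnat to numgen random mod 2 map {
          0 : 10.244.1.5,
          1 : 10.244.2.7 }
      }
    }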
Jay: Exactly — so, sudo. So if we look here, if we look at the same thing in iptables — look at the complexity here: 10.96.0.1 — we can see there's this destination rule, and then we go to the KUBE-MARK-MASQ chain, and that's where it gets masqueraded. Well, I already showed it to folks before, but you can see it's much more difficult to read. So, going back here — exactly. And just to complete the circle, for folks that are wondering why this is working and why this is not working —

Mikaël: kubectl get ep -A.

Jay: And the weird thing is that — well, actually it's not weird, because if we look at our CoreDNS pods, they do have IP addresses, because they're running, right? So kubectl get pods -o wide -n kube-system, grep core — you can see these have valid network interfaces, they have IP addresses; the problem is they didn't come up. So let's go look inside and see why these pods didn't come up: kubectl logs — okay, and we could still —
Jay: Essentially, you know, we've got CoreDNS, and CoreDNS is here, and CoreDNS is trying to access the API server through the kubernetes endpoint — which is an endpoint created by kube-proxy itself — but it can't access it. So there's no routing rule it's able to —

Mikaël: We don't know yet what's going on with CoreDNS. Obviously, at the beginning it was saying that it could not connect, but we don't have those messages — see, it's at 9:00 PM, eight minutes ago, so that's quite a long time ago, yeah.

Jay: It's not clear why, yeah. So we just hit this bug right before the demo — which is kind of the point of TGIK, right? We kind of run fast and loose. So I guess the question here, Mikaël, is: is there something we could do inside of KPNG to try to debug this, or try to get this working, or —?

Mikaël: I already tried with an Alpine Linux container, and I can access the kubernetes endpoints — so that's kind of weird, but it doesn't work for CoreDNS.

Jay: We're just hacking here, so we don't know. Maybe if we start these CoreDNS containers a very long time after KPNG has already started, that will solve it. It looks to me like we have the same issue.

So this is my repo, where I have all my experiments that I do — kube-proxy experiments and so on. Okay, here we go, all right: let me go to hack, there we go, and now I'm going to do my — I used to do kpng-local-up, right; we'll git pull.
Mikaël: You just — just kubectl apply it. That's what I wanted to check: that's where I've seen there was a problem with the masquerading that's required to access the internet — that's why we have the setup script. If you do a — you won't be able to ping, and you don't have DNS working, since CoreDNS is not working, yeah, and you don't have curl. So you need to install curl, and if you want curl you need to download it, so you need to get the masquerading working on the node, yeah.

Jay: You know, Ricardo, one thing I want to do is go over your moving-kube-proxy-out-of-tree thing — so maybe, since you're on the stream, you can walk people through that. If you want to pull that up, we can show it to people, and we can maybe hack on this for another — it's almost an hour and twenty minutes, so let's try this for another five minutes, and then we'll jump over, go through that, and close up. I was just teasing.
Ricardo: Yeah, it's another way of doing networking things: instead of relying on the kernel, as eBPF does, you move everything to user space. So you say to the kernel: don't even bring this packet into the networking stack, just send it to something running in user space. So I guess OVS realizes a little bit of this with DPDK? I guess — okay, I'm not really sure, yeah, okay.

Jay: Cool. Yeah, for folks wondering about these CNIs, by the way — since we have a couple of them up, I think it's worth just showing you. And while I do this, Ricardo, do you want to pull up your PR, and then we'll just hop over to it?

Ricardo: Yeah — I think you have a huge copy-paste into the Antrea code, right, that you were going to show.
Jay: Yeah, as an example, yeah. So, on what Ricardo's saying about OVS: if you look at the routes on a node that's running Antrea as the CNI provider, you'll see that everything talks to this Antrea gateway. Antrea uses Open vSwitch, and Open vSwitch is basically a virtual switch that runs on every node, and, as you can see, what Antrea does is use this .1 IP address that's in the pod network namespace as the virtual-switch gateway for every container.

Okay, this doesn't change the gateways, right — I can rerun the same command and I still have the same situation: I have all these gateways managing the same — and if I run ip a, you can see I have all these virtual interfaces, one for each of these different CoreDNS pods. So that's the whole —
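(The commands behind what's on screen, if you want to poke at an Antrea node yourself; antrea-gw0 is, as far as I know, Antrea's default gateway device name, so treat it as an assumption for other configurations:)

    ip route                  # pod CIDRs route via the local Antrea gateway
    ip addr show antrea-gw0   # the .1 address of this node's pod subnet lives here
    ip a                      # one virtual interface per local pod shows up in the full list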
Jay: What is it called — so, you can see in a Calico cluster every pod has a network interface as well, and if I do route -n — oh wow, I don't have route -n on here. There we go — so in a Calico cluster you'll see you have these per-pod routes; that's the way those routes work. They both wind up using — I think they both use the node IPAM controller, so your nodes kind of shard up these IP spaces for each one of your containers. But again, this is how you actually look at the underlying CNI implementation in each of your providers. So yeah, I can just show them that.
Jay: So, for example — one of the reasons — so Ricardo is working on moving kube-proxy out of tree. If I go to pull requests and filter by author — and there's the other one; I actually have this email pulled up here. So, if anybody's interested in what it means to move something out of tree: we brought it up in Gitpod a little while ago, and I'll reload that workspace. All that code I showed you earlier for kube-proxy is in package proxy, and so, in order to do this, what Ricardo has to do is take all these sub-packages — conntrack, ipset, iptables, all this stuff — and move them into a part of Kubernetes that you can vendor from, because you can't vendor from the core part of the repo. So: moving these into staging and all that — it's kind of a lot of work.

So if we look — here's his pull request; I think he broke it up into separate pull requests. But to see why this is valuable, why this is important: Antrea — and I believe Calico does this as well, but I'm not sure; yeah, Calico does it as well — in order to implement your own sort of proxy-like logic, what you wind up having to do is copy bits of it; there are bits in here that are copied from kube-proxy itself.
Jay: We would be able to literally just have the Antrea bits implementing this part down here — or, if Calico wanted to implement their own service proxy, or if Cilium or any other CNI wanted to implement their own service proxy, they could just implement this part, and then all of this would be imported. And I think I actually have an example of this in the show's notes. I was going to look for an example in the code we were looking at earlier — if you look in iptables — actually, we can just go here, if Gitpod loaded up. Let's see if Gitpod was fast enough — oh, it's still going; come on, Gitpod, nobody's working on Friday, you should be faster. Here we go, okay, it's there. So if you go into package and you go into proxy — oh gosh.
Jay: Yeah — topology, right. So this stuff — and Mikaël, you can correct me if I'm wrong — this logic that's inside of the proxy, or inside of ipvs: this logic is in KPNG, so any proxy implementation would just get it for free. But right now we basically have to have this logic inside each one of these directories — we're going to have to have some kind of topology stuff in there — and so there's all this copy-pasting that goes on. If I go in here, I'll see some topology stuff, right: you wind up having to deal with the semantics of what a service is, what a Kubernetes Service is, what the topology of the services is — stuff that has nothing to do with IPVS or iptables or any other data plane — and we have to continuously copy that from one of these implementations to the other.
Jay: So that's one of the things. And then — I'm going to end by showing the KEP for the KPNG stuff, so people can look at it and read about it. Let's see, what is the name of the — is it called kpng — this one, yeah. So, I guess, the overall thing — I know everybody's always saying "you should read this KEP", but I think this is really one that you should read, because it's not just an API change that people are debating: it's a big paradigm shift, and just from reading it you can learn a lot about how the service proxy works and how it should be working, and so on and so forth — just from reading the comments, the infinite book of comments that's in here. So this is good homework for everybody.
Jay: Cool — so that's, okay, so I'm just catching up with the questions now. "Is it possible" — what context? Oh, my font is small again — that was, okay. Well, okay: "use the cooked view for the markdown rich diff" — oh yeah, like here, you mean. I think we're probably kind of at the end; we've been doing this for an hour and a half, and I think we've gotten to go through everything. Does anybody have any questions or anything before we go? Because usually I cap it at the hour and a half, but I always like it —
Mikaël: You could go into, you know, the KPNG pod. Yeah — oh yeah, you are, yeah — let's forget about this problem. So, in the KPNG pod — yes, and if you do a kpng nodelog, for instance — oh yeah, okay, yeah. So here you have nodelog -h, so I'm sure there should be the target flag — a Unix socket, under /run — I need to remember the exact name; it depends on the version you have.

Jay: So what Mikaël's showing us is how we can look at the node-level logs for the proxy from inside one of these DaemonSet pods. For context, folks: the biggest problem with all these issues is that people file them and it's really hard to figure out what's wrong, because in order to figure out what's wrong — the kube-proxy logs have a lot going on in them; it's doing a lot of different stuff. So here we have these individual little utilities that people can use to get specific information really easily for their services. So anyways — what am I doing for the —
Mikaël: That's why — you've connected to the KPNG instance that's watching via the API server, and you're getting the node logs. The node log is the node-level model: that's the simplest model we have in KPNG, because it doesn't have all the information required to do port selection, topology, and things like that — you just have the endpoints.

Expose the Alpine deployment — yeah, there's a port, let's say 80, whatever; it's just one port, just to show it. Anyway — okay.

Jay: Let's get rid of it, all right — so there we go, I deleted it, so that's this minus, right? Yeah — so this is cool: I can watch this log, and every single time an operation happens at the data-plane level, I can see it. So now what I can do is expose this port again — okay, and — okay, nothing behind my hands here — here we go, there we go, okay, cool. So now you can see this plus, right?
A
So I can continuously just view the diffs of what's changing in my cluster from the service perspective. So, for example, imagine you were on a real production cluster, maybe a 50-node cluster, and you were trying to figure out whether there was a bug in your proxy or whether it was a bug in the way it was writing rules or whatever. You could run this: whoever's seeing the issue could create their app, and then you could run this and just capture the diff of all the changes that happened from that point, and that would be all of them and nothing else, and you could capture that per node very easily, right?
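A rough sketch of that workflow, assuming the node-log subcommand from the demo (exact names and flags vary by kpng version): stream the node-level diffs in one terminal while you reproduce the issue in another, and every data-plane-relevant change shows up as a "+" or "-" line.

```bash
# Terminal 1: stream the node-level model as a live diff
# (subcommand name as spoken in the demo; flags are version-dependent).
kpng node-log

# Terminal 2: reproduce the change and watch the diffs appear.
kubectl delete service alpine                 # node-log prints "-" lines
kubectl expose deployment alpine --port=80    # node-log prints "+" lines
```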
B
You can't get in between there? Well, yeah: if you stop the node log and you do the global log, you'll see... let's say you see there's a problem, okay, and you want to show some figures. Oh...
A
Yeah, go ahead. I was just gonna mention: with kube-proxy, if I get the logs, you know, I've got everything in here; it's not separated at all, right? I could increase the verbosity, and if I increase the verbosity I will see the iptables flags that are being set and all that stuff, the commands that are being run, but the state is not... the model of the system is not separated out this way. So anyways, yeah, go on.
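For contrast, this is the kube-proxy experience Jay is describing: a single undifferentiated log stream, where more detail only comes from raising klog verbosity.

```bash
# Everything kube-proxy does lands in one log stream:
kubectl -n kube-system logs ds/kube-proxy

# Raising verbosity (e.g. running kube-proxy with --v=4) adds the
# iptables commands being executed, but the proxy's internal model of
# Services and Endpoints is still never exposed as separate state.
```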
B
So now you can... let's say you see that you don't have your endpoints. You expect an endpoint for the alpine service, for instance, and you don't see it. Yeah, you've queried the local node, the local model, and you don't see your endpoint, so you want to check if it's a problem in the global model, the one at the top of the drawing. And to do that, you can just query the global model, with global log instead of node log, yeah.
A
Okay, so now what we're looking at is... we were just looking at the node-level log, but since we're running both the node-level stuff and the API server watch in the same process (that's not a requirement, but we are doing that), we can do global log, and now we can see the other part that I just showed you.
B
Know
yes,
for
instance,
if
you
suspect
a
problem
with
the
topology
implementation,
you
could
have
a
look
at
the
global,
the
global
model
to
see.
If
you
have
all
your
endpoints,
you
expect
and
then,
if
you
don't
see
them
in
the
world
log,
you
can
say
that
you
have
a
you,
have
a
bug
in
the
topology
implementation,
for
instance,.
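A minimal sketch of that diagnosis, using the two views from the demo (node-log and global-log are the names as spoken; exact invocations vary by kpng version):

```bash
# 1. The node-level model is missing an endpoint you expect:
kpng node-log      # no endpoint for service "alpine" shows up here

# 2. Check the global model, the one at the top of the drawing:
kpng global-log

# If the endpoint IS in the global model but NOT in the node model,
# the bug is in the per-node processing (endpoint selection, topology).
# If it's missing from the global model too, the problem is upstream,
# in what was ingested from the API server.
```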
B
Then
just
one
last
thing
go
back
on
the
terminal.
Okay
and
then
you
can
say:
okay,
I
have
a
use
case
in
my
in
my
cluster.
That's
not
working.
You
can
start
the
global
life.
B
Just
to
PS
PS
aux,
so
we
can
get
a
copy
pasta
quickly,
no
dash!
Oh
there
we
go
okay,
so
we
we're
going
to
do
the
capping
Cube.
Okay.
A
Yeah, okay. So what we're doing here, right, is another thing. Again, when we look at the way we normally run the proxy: we start this one process, and it's storing in memory all the state of the watches from the API server. It's storing all that in memory.
A
So
in
this
case
we
are
We
Now
where's.
The
file,
though
you.
A
Okay. So, okay, cool. So yeah, what it's done is it's aggregated. Those of you that are familiar with CNI providers will be sort of like, okay, because, you know, Calico and Antrea do this for network policies, right? They have a controller that aggregates all the network policy information into a form that's going to be easy for the nodes to implement, right? And so, like...
A
So
imagine
the
bug
reporting
situation
right
like
from
here
once
we
move
to
this
we're
gonna
have,
but
the
ability
people
can
can
actually
submit
a
bug
where
they
say:
here's
the
model
and
here's
the
node
level
logs.
Here's
what
my
node's
doing,
here's,
what
my
model
is
doing
and
so
here's
what
so
what's
the
problem
right.
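That points at what such a bug report could attach, sketched with the demo's two subcommands (names as spoken; the output file names are hypothetical):

```bash
# Capture the global model and one node's view of it side by side, so a
# maintainer can compare what the node was told with what it is doing.
kpng global-log > global-model.txt
kpng node-log   > node-model.txt
```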
B
And
now
you
can
do
capping
from
file.
B
And from file, you do... no, no, it should be... sorry, it's "file", not "from-file".
B
And
now
it's
using
the
file
as
the
source
for
the
model,
and
you
can
now
maybe
Cube
cattle
exactly
in
another
terminal.
The
bottom
terminal,
for
instance,.
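Putting the pieces of this demo together as a sketch: only the "kube" and "file" source names, and the correction that it is "file" rather than "from-file", come from the demo; the to-file/to-api sink names are assumptions, and flags vary by kpng version.

```bash
# 1. Run kpng with the kube source, dumping the global model to a file
#    (in the demo the original flags were copied from `ps aux`).
kpng kube to-file

# 2. Re-run kpng with that file, instead of a live API server, as the
#    source of the model: the source is "file", not "from-file".
kpng file to-api

# 3. In another terminal, inspect the model now being served from the file.
kpng global-log
```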
B
Yeah, that's pretty nice, and maybe even more interesting: the file is actually watched for changes. So if you keep it running in the background... yeah, so Ctrl-Z, or maybe you can open another tmux, it would be... yeah, okay. And now you can, via the global state... if you... Ricardo, this is why you...
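Since the file source watches for changes, a quick experiment suggests itself (a sketch; "global-state.yaml" is a hypothetical name for the dumped file):

```bash
# Edit the dumped state file while kpng keeps running in the background
# (Ctrl-Z plus bg, or a second tmux pane, as in the demo).
$EDITOR global-state.yaml

# The file source picks up the edit, and the change appears as a diff
# in the global model.
kpng global-log
```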
A
Vlad has been working on a new test framework, and one of the things we're really interested in is building some kube-proxy test suites, a new set of kube-proxy test suites that are really comprehensive and easy to read, and so he's got a new test framework over here.
A
That
is
going
to
be
sort
of
an
alternative
to
Ginkgo
that,
hopefully
we're
going
to
start
writing
some
some
new
Kube
tests
in
so
if
anybody
wants
to
like,
if
anybody's
looking
for
an
alternative
way
to
write
their
tests
and
go
and
they
want
to
play
around
with
this
I
I
think
early
feedback
would
be
really
cool.
A
But
it's
kind
of
like
it's
as
you
can
see,
it's
very
readable
and
it's
very
similar
to
Ginkgo,
but
it
solves
some
of
the
some
of
the
problems
that
we've
seen
over
time
with
Ginkgo,
like
stack,
traces
being
tricky
to
reason
about,
and
and
other
things
like
that,
and
it's
like
it's
Coop.
A
It's
gonna
have
some
Coop
native
features
and
stuff
inside
of
it,
so
that
not
yet,
but
it
will
so
that
it
will
be
easier
to
write
things
that
are
kubernetes
Centric
in
that
test
framework,
and
so
I
I
wanted
to
talk
about
it
a
little
in
the
week
in
review,
but
I,
but
I
almost
forgot
so
I'll
put
Vlad
over
here
at
the
top
vlads
new
test
framework,
and
this
is
being
worked
on
in
the
Upstream
as
well.
It's
like
there's
like
a
cup
and
stuff
like
that
associated
with
it.
A
So
if
anyone
wants
to
try
it
out
cool
and
we're
probably
going
to
try
to
write
our
write,
some
Coupe
proxy
tests
in
this
or
most
likely
that
are
that
are
more
readable
and
and
Doug
actually
has
written
an
initial
prototype
of
that
Doug
Doug
landgraph
of
some
Coupe
proxy
tests
that
output
a
bunch
of
data
and
and
tell
you
about
that,
and
and
and
are
very
explicit
in
terms
of
like
what
services
they're
testing
and
we're
hoping
to
use
that
as
like
cross-compatibility
backwards
compatibility
with
the
old
with
the
old
code
proxy,
so
that
we
can
really
have
a
definitive
way
of
moving
forward
with
this
stuff.
A
So
it's
just
very
early.
It's
going
to
take
a
long
time
to
get
there,
but
we
we-
hopefully
hopefully
within
a
few
months,
maybe
a
month
or
two
we'll
actually
be
able
to
show
IP,
ipvs
and
everything
else,
all
the
different
versions
of
NF
of
this
stuff
working.
Maybe
we'll
do
a
second
tgik
follow-up
on
this.
A
So
how
can
I
get
in
the
world
with
Dave
being
a
data?
Scientist
I
want
to
switch
to
the
cloud
a
well.
Why
don't
you?
If
you
really
want
to
work
on
some
tough
problems?
We
can
help
you
so
as
as
me
me
what
what
what
what
if
you're
interested
in
this
stuff,
why
don't
you
join
kubernetes
slack
and
then
reach
out
to
like
me
or
Ricardo
and
Sig
Network,
and
and
we
can
find
help
you
find
a
good
project
to
get
started
on.
A
Ricardo's
got
a
few
things
he's
doing
some
stuff
with
nginx
he's
doing
some
stuff
with,
of
course,
Google
proxy
we've
got
some
other
stuff
in
the
network
policy
world
there's
all
sorts
of
stuff,
so
we
can
help.
You
get
started
sure
cool.
So
thanks
everybody
I'm
gonna,
just
let
everybody
go
because
it's
about
we're
at
the
two
hour
mark
now,
but
yeah
join
us
on
k8.io
slack
Sig
Network.
Send
me
a
message
today,
as
I'll
I
will
I
will
yeah
I'll
help
you
get
started.
B
You got the endpoints set after the services? No, it's not about that. It's that when we add something, we have the service and then the endpoints, and when we delete something, we have the deletions of the endpoints before the deletion of the service.
A
I love this button. Next time I'm going to use this button more; I just love this button, I just want to press the button one time before we go. I'm going to give Evan a little... Antonin's here! Hi, Antonin, I didn't even know you were here. Okay, cool. So this was cool. Thanks, everybody. Thanks a lot, Mikhail, for coming and helping me understand the kpng stuff a little bit better, and Ricardo and everybody else for showing up, especially those of you in Europe.