From YouTube: TGI Kubernetes Episode 181: KPNG part duex!
Description
Join Jay and some surprise members of the upstream KPNG team for an overall code walkthrough and project update of KPNG, and maybe some other random meanderings along the way!
Notes: https://tgik.io/notes-181
I made it... yeah, no, there's a logo, I put some. I don't have it, so. Okay, we're gonna talk about... so let's start with the notes. Dockershim is actually out of tree. I guess, like every TGIK, we have this as an update, but I think maybe this will be the final update about it. It's gone because it's out of the code base.

It won't be there anymore. So that's, like, maybe the final announcement you'll ever hear about that.
This week Amim gave an awesome presentation on Windows on Antrea, so, me and Amim on the Antrea Live show, which we would love for you to come see. It's Project Antrea on YouTube, and you can also check it out at antrea.io/live, where we have all our episodes.

It's like a CNI-focused version of TGIK. So Amim went and did a really cool presentation of our sig-windows-dev-tools, which I don't think we've done a full TGIK on yet, but that's a really cool sort of sandbox for, you know, spinning up clusters from scratch with Windows. What's up, Sharat. Sharat came here.

He just joined our team and he's going to help us out on the KPNG project, or at least I'm going to try to convince him to, and Nishad said I'm a good sales guy, so I'll probably be successful at that, right, Nishad? So, Zach.
This is a controversial topic for this show, because, you know, I did a hamburger for the logo, but Laurie asked me to do the show, and you know Laurie's a vegan. But the only reason I did the hamburger was because the last show was a hamburger.
Jamie Phillips, my friend, wrote this post about CSI proxy. CSI proxy is a gRPC connector: for Windows clusters it allows you to spin up a Windows node, run a regular CSI driver, and run CSI proxy on that Windows node, and then when your Windows container tries to mount a CSI volume, all the calls that it makes, these PowerShell calls, right, get proxied through the CSI proxy onto the host. That allows Windows to consume CSI in a really nice, decoupled, sort of native kind of way.

Oh, Zach, I didn't know you lived in Arizona. So if you don't have that, then your Windows containers have to use a thing like Rancher wins, right. They need to use a thing like Rancher wins, and what wins does is basically give you the ability to run system calls on the Windows operating system: any arbitrary system call, through a socket. Which is fine, you could do that, but CSI proxy is a well-defined, fenced system interface. It doesn't have that huge surface that will let you do anything.
Yeah, why don't I have a... it's really weird, because I always want to do this, and on this machine, for some reason, I just don't have it.

No, okay, we'll do that experiment anyways! So that's awesome, though.
It is great, yeah. Jamie did this post, and the nice thing Jamie did in this post is he walks through how to set this up with an SMB driver, from beginning to end, on Rancher. So it's a good way to learn about Windows storage and a little bit about persistent volumes and stuff in CSI. And what we want to do, if somebody wants to get involved and help us out: we have our sig-windows-dev-tools, and we would love to take this...

We would love to take Jamie's blog post and then automate it into our sig-windows-dev-tools. In fact, we'd like to do that so much that we should file an issue to do it right now: add Jamie's blog post CSI thingy. Okay, I'm gonna do it. Okay, so yeah, anybody who wants to take this issue: sig-windows-dev-tools, issue number 143. Okay, good! Now, what else is in the notes?
To point out, actually, in the dockershim post, what they point out, which I think is what people are going to have difficulty with, is that the default behavior has changed, right. Before, if you didn't set anything, it would just assume the dockershim was present and then connect to it. Now, if there are multiple things configured, say, if you have Docker installed, Docker also installs containerd, so you have multiple things configured.
She's still... it's a secret, it's like her secret weekend project. So, all right, so, Vlad is here. Vlad, what's up, man?

So there's a plugin for it, so we could actually clone this and try it out. So, all right, maybe if we have time in the show we can give this a shot, because it would be nice to sort of debug kpng with this. Oh, I can just... can I just literally run this, is that all I have to do, or do I have to install krew first? I guess I have to install krew, right. kubectl...
Where is it... here we go. So I think I just deleted a bunch of stuff; I killed it. So, before we get started, I thought I would just show you all how this sort of came to be. It turns out that the Windows kube-proxy, a long time ago, we were finding was really, really nasty, in terms of... like, we wanted to add our own specific command line options. Ravi made a patch for that, and that was really hard.
I don't even have my Slack open, because my CPU started to slow down. The global issue for tracking the configmap... here it is.

Yeah, so this is one of the ones, and I eventually closed this because we're doing it all in kpng, I think. But if you look at things like this, there's a big thing about how you edit a config map in Kubernetes and you have no idea what's going to happen, right. For example, you might expect that if you edit the kube-proxy config map, it would restart or reconfigure, but it doesn't, so you have to restart it manually.
No, you gave me something. If you give me the kubeconfig, I could use that, probably. Okay, I don't really need it that much, but if you give me the kubeconfig, maybe I can use it. I'm not on the VMware internal network.

So what is that? Oh, let's see, Duffie's thing, he showed me.

Yeah, so I like that, so let me go back, pull out... I don't know what happened to Pallavi.
Okay, yeah, yeah, but okay. So that's okay... so my brain hurts, kpng, unfortunate naming. So once this starts we'll look at the config map for kube-proxy. But it's weird, because you have options for all the different proxies in there, and then it has to parse through them, and more and more... well, Duffie's here from Cilium, and he can tell you, like...

Yeah, and there's not really any way to plug in the service proxy in Kubernetes, right. So I don't know how you all do it over there at Cilium, but if you have anything to say about that.
Basically, Cilium does things a bit differently, because our entry point into all of this is a bit different than most, right. So, again, we use eBPF for the underlying technology. With most CNIs, once the packet leaves the IP address inside of the pod and starts headed off to other places, that's kind of the entry point: then you can use iptables to decide what's going to happen,

or you can use IPVS to decide how to resolve things, like kube-proxy's internal service load-balancing mechanism and all that stuff. We're able to do all of that in eBPF, and so our entry point, instead of being the IP address, is the connect call, right. When we see that system call, a connect call to connect to a socket, that's where we grab it and determine what to do with it, like, what are we going to route it to?
Which is different than: a packet has come from a pod IP, and now that I know the pod IP, I have some construct of the pod IP, this is the pod's IP address and it's sending me traffic, so then I can configure my iptables or IPVS rules accordingly and say, since I know it came from this pod IP, and there are network policies that may associate themselves with those pod IPs, then I can make decisions about what to do with that traffic. We're making those decisions in eBPF.
Basically, think of it almost like JavaScript for the kernel, right. We're making decisions about what happens when that connect call happens. So when you define network policy, and you say, if it came from this pod, and there are particular network policies in play that may restrict or allow or manipulate that traffic in some way, we're defining all of that business logic in eBPF.
Cilium is meant for anything with the Linux kernel at this point; it's not limited to one thing or another. If you go to cilium.io you'll find docs for how to deploy Cilium as a native CNI, or in a variety of different ways, in a variety of different clouds, from Azure to AWS to Google, or, you know, Oracle, whatever it is.
OVS being Open vSwitch, a virtual switch, right: effectively, it can actually accelerate that stuff in the kernel. There's an idea in OVS of fast path and slow path. Fast path is, you know, connections between the pod's IP address and other IP addresses on the same node are likely going to be fast-path connections, because the local OVS switch already understands how to get to that stuff, whereas...
There are a few implications of that. One of the ones that really resonates with me: I've been a network engineer for a long time, and I've worked on some pretty big infrastructure. I've dealt with DDoSes, and with coming up with different architectures of networks that would be able to support kind of the general traffic inside of lab environments for Juniper Networks; I was there for like six years.

I did that a bunch, and I also worked on AT&T backbone networks back in the day when DSL was kind of kicking off, and some of the stuff that really impressed me at the time was factoring into a design the ability to control traffic as close to the source as you can get, so that you can eliminate the noise that that traffic might generate on the wire, right. So you're not, like...
I... do you mind if I... I think, as I understood it, what it does, the benefit, is basically that it means you can reject the traffic before it actually hits the TCP stack. It still goes through the kernel layer, but you intercept it: Cilium, I assume, the eBPF, basically intercepts the syscall and can just abort the connect call, rather than the send call, yeah.
Well, that's the point: with eBPF, this is just one of the many entry points at which we can actually control traffic, right. The first one, in fact, is the connect call, but as we move down the host stack we can control anything that happens on a network interface; we can control anything that happens at pretty much any point in the stack. Which gets to your original question: okay, but how does Cilium do kube-proxy, right?

Like, if the two pods are on the same host, if the two entities are on the same host and that connection to the socket happens, we can actually just kick that traffic right over to the other socket, where the serving socket for that other party is, right. We can shortcut quite a lot of that traffic.
Yeah, yeah, it just stays in the application layer, because we're just transparently passing it. In fact, that's how we do some of the Hubble observability stuff, right, where we're actually showing DNS packets that are going by and that kind of thing. We basically pass traffic from that connect call, transparently, through a configured Envoy proxy that sits one per node; we pass it transparently through that Envoy listener.
Yeah, so, okay, so that's the difference between a normal CNI-type thing, or Kubernetes network tooling, and something like Cilium: you don't have to go into the kernel and make these decisions, and...

Yeah, all these things are passing through the Linux kernel. The question in my mind, and this is the key difference, is, if we were to do the life of a packet, even before it's a packet: your application makes a connect call, that connect call moves down the layers, it becomes associated with that source IP address, the packet gets bundled up and sent out on the wire, and it comes down to the node's network...
Exactly, right. So, in our case, with Envoy you can configure multiple listeners, right, and Envoy can act as a transparent proxy, something that happens higher than the network layer; maybe not the application layer, but a bit further down. So we can actually take that traffic from that socket, pass it transparently through an Envoy listener, get all that rich metadata from the traffic that's being passed transparently through Envoy, and then expose that as observability data, right.
And it can be incredibly efficient in how it handles that switching traffic. Like, I worked at Nicira back in the day, on OVS itself, years and years ago, right, so I know for sure that OVS is a huge improvement on the way virtual switches were defined in the past. But this is about more than networking, right: eBPF happens at points in the Linux kernel that have nothing at all to do with networking.
Right, and so the question... the comparison isn't fair, I think, to either of them, to be honest. It comes down to what you're trying to achieve, what problem you're trying to solve, and the criteria by which you define success in solving those things, right. Like, OVS is big value, and I mean, was like...

I mean, some of the big value that we saw in OVS, when we were bringing that to market, was in open research, or in OpenStack, where you wanted to define private networks that maybe had overlapping IP addresses or overlapping MAC addresses and stuff like that, but you needed to have some programmable layer that you could define all that stuff with, right.
People don't particularly care about that anymore, right. They just want... they do want to be able to define things like network policy. They are still attracted to the idea and the constructs behind microsegmentation: being able to say that this application, as I have defined it, can only communicate with these other applications, by label or by some other easily quantifiable filter set, right. Yeah.
Let's get back to the... okay, so we had a flake one time, and it was related to IPVS. I wrote up a post about it; if folks want to look at it, I put it in the show notes. This was a while ago. So we're looking at this upstream, and the initial test that failed was this "ESIPP [Slow] should work from pods" one, right. Okay, so, and so, yes... oh yeah, where'd...

...I go. Yeah, here, okay. So this was the name of the test, right. So we went ahead and we looked into this, and ultimately the reason that this failed, once we looked into it, was there was an end-to-end test and it saw that the source IP wasn't preserved. That's where it fails.
So it's like: this pod was never able to get to that pod. This is how the Kubernetes end-to-end tests work: we make pods, we have one thing here and one other thing there, they try to talk to each other with different services and whatnot, and there's hundreds of tests that do that. So ultimately, the next thing you have to do is go into these Prow logs, and those will have... you can go into this job.

This is a specific job, and the name of this job was test-ipvs or whatever, and it's using IPVS, which is one kube-proxy implementation: there's iptables, there's IPVS, there's Windows kernel, there's user space, there's Windows user space. And ultimately this is the thing that was being called; this command was being called, and it was failing. You can see this is an external service IP, a Google range, a 35-dot-whatever IP, right.
So this was failing, and the exit code of that was a seven. So you're like, okay, what's seven? Failed to connect. That means you have a firewall on your side that prevents you from calling out to this, and I think it can also happen if there's just nothing on the other end: if there's an IP address and the IP address doesn't have anywhere to go, I think you can get a seven.
I don't know, Duffie can correct me if I'm wrong there, but there's a lot of hypotheses you might have here, right. Maybe the client isn't up. Maybe the kube-proxy never wrote any routing rules for that, right; so maybe the client was up and the external IP was up, but there was nothing on the node saying route this traffic to that external IP.

So we looked around, and Antonio... I looked at this with Antonio, he showed me how to do this stuff, and Antonio's...
Yeah, and we need to get him on here one of these days. So we dug through all these logs, and finally what we ultimately found was that it was related to this externalTrafficPolicy: Local bug. It turns out that with IPVS, if the pod isn't on the same node... there's a certain semantic in iptables where, if external traffic policy is on, even if it's not local, I think it will forward it to a non-local pod as a fallback, or something like that. And so we just disabled that job, right. So this is the problem with having a big monolith in tree, right, it's like...
So, you know, that's kind of one of the things that brought about this whole idea of: well, we need some framework where people can extend the kube-proxy without having to jam it in tree, right. So that's kind of how we got to the kpng thing. So in, yeah, in like 2021...
Ideas for kube-proxy: we had this mailing list thread where we asked around and we said, okay, a lot of people have ideas on this, is anybody willing to help us try to clean some of this up and make it easier to maintain the kube-proxy? And then Mikaël was like, oh, I wrote this thing that does that. So then we're like, okay. So then we had our first kube-proxy meeting group, and nobody really talked a lot about cleaning up...
...kube-proxy; a lot of people had questions about kpng instead. So we're like, well, what if we just kind of hack around on this thing and try to get it working? We did that for a few months, and you can see... now a lot of people are committing to it. These are the commits back in 2020, and here's where we're at now, right; we've got like 10 people working on this thing now.
So what this thing does is it makes it so that anybody can build the thing... well, not anybody, but it's easy to extend the kube-proxy in the way that Duffie was just talking about, right. What kpng does is: you have a global data model, in memory, of the entire Kubernetes state space for networking, and that global data model is decoupled from the Kubernetes API.
Okay, and then what happens is you have these things that slurp in what the kube-apiserver has, right, and they put it in that data model, and then you can plug in through gRPC: you can just plug in a sink, and, you know, iptables is a sink, for example.

Where did it go? I thought I had a diagram of this in here, but maybe I don't. Anyways, I can just tell you the way it works... oh, actually, I do have a diagram, it's right here. So the way it works is you have a back end, and the back end will periodically talk to kpng, the boss, and say, hey, I need the new endpoints, I need the new services. So it grabs all of that and then it writes the rules out.
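For reference, that pull-and-rewrite loop can be sketched in a few lines of Go. This is only an illustration of the shape being described, not the actual kpng code; fetchSnapshot and rewriteRules are hypothetical stand-ins for "ask the kpng server for the current services and endpoints" and "regenerate the backend's rules from scratch".

    // Illustrative sketch of the periodic backend loop described above.
    // Not the real kpng implementation; all names here are stand-ins.
    package main

    import (
        "fmt"
        "time"
    )

    type snapshot struct {
        Services  []string // e.g. "default/web -> 10.96.0.10"
        Endpoints []string // e.g. "default/web -> 10.244.1.5:80"
    }

    // fetchSnapshot stands in for the call to the kpng server ("the boss").
    func fetchSnapshot() snapshot {
        return snapshot{
            Services:  []string{"default/web -> 10.96.0.10"},
            Endpoints: []string{"default/web -> 10.244.1.5:80"},
        }
    }

    // rewriteRules stands in for regenerating the backend's full ruleset
    // (iptables, IPVS, nft, ...) from the snapshot.
    func rewriteRules(s snapshot) {
        fmt.Printf("rewriting rules for %d services, %d endpoints\n",
            len(s.Services), len(s.Endpoints))
    }

    func main() {
        tick := time.NewTicker(5 * time.Second)
        defer tick.Stop()
        for i := 0; i < 3; i++ { // bounded here so the sketch terminates
            rewriteRules(fetchSnapshot())
            <-tick.C
        }
    }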
Okay, yeah, okay, see you, sir. Eric, okay. So, Vlad, congratulations, we were able to get ktop running. Well, so what happens if I hit enter? Anything? Nothing. But this is cool, this is great, Vlad. Vlad's been working hard on ktop; I remember seeing the original one a long time ago. So, all right. So I say these are decoupled, and I really mean it: you start a pod up and you've got two different containers here.

You've got the thing that watches the API server, and then you've got the thing that does the back-end work, right. So...
If I go in here, I can actually start this up really easily. I can run this hack script, kpng local-up-cluster, and this is going to start... so there's nothing here, right, there's nothing running here. I can just clone kpng, and then I can just run the hack kpng local-up script, and if I run this it will create a kind cluster with no kube-proxy. That's a newer thing that you can do in kind.

I'll just show people the script, right. So we did it initially with Calico, but, like, you know, you could do any CNI, or just use kindnet; in our CI jobs we use kindnet.
So, yeah, we compile it down...

Yeah, we build kpng here, and then after we build it we install k8s and we install kpng. So let me find where our kind definition is. I think it moved, it used to be in here... vim hack/kind.yaml... oh yeah, here it is. So you can see here this kube proxy mode: none, right. If you do that, you'll spin up a kind cluster with no kube-proxy, right, so yeah.
So we do that. So we're doing all that now: we're making this kind cluster, it's not going to have a proxy, then we're going to load our proxy into it, right. We've already built the kpng image and stored it locally, and then we'll load it in. Then, when this comes up, we'll check to see whether... you know, the first thing to check, the best thing to do, is to check CoreDNS, because CoreDNS...

All right, so let's give it a second. So now kpng is coming up, and as this starts I'll start walking you through the code base. So, we mentioned back ends several times, folks. What's up, Doug, Moz, I haven't seen y'all in a while. Nice one, Vlad, what's up. I mean, I know I do have a diagram, I'm just not good at searching Miro.
Doug, so the way this works, the way the code base is structured, is there's these back ends. Right now we have iptables, we have IPVS, and nft, right; those are the three back ends. And then there's a cmd directory, and each one of these back ends has its own go.mod; it's like its own Go module, right, so you don't have to... this is the first big difference from the in-tree kube-proxy. In the in-tree proxy...

Nice, totally pluggable, right. So you don't even need to... obviously, you can run kpng with a back end that's not even in tree, right, and Lars has a great blog post about that. I have the link to his blog post somewhere in here.
Yeah, so he has a blog post. Lars has this blog post about his initial take at this, and when he first tried to extend it, it was really messy; we had to do a bunch of weird stuff to get it working. I think now it's really easy to extend, because when he tried to do this we realized we needed to separate these back ends out, so that it was really easy not to hit Go dependency problems.
Yeah, so then, if we look at this script we just ran: what it does is it runs this local to-sink command, and that local to-sink command reads this local-commands function, and what that does is it looks at every back end that's been registered and then just adds a command line option for it. That allows you to run these in-tree ones in the same way that you would run an in-tree kube-proxy, and then
these back ends take over. So then you're probably wondering, well, how do these back ends work? What happens is you have this other process for the back end, and if you go to sync.go you'll see kind of what we might call the interface between a back end and a front end, and the interface is these four functions: set service, delete service, set endpoint, delete endpoint. That's really all any service proxy does, right: an endpoint comes in and it has to do something with it, and a service comes in and it has to do something with it, right.
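Those four calls are easy to picture as a Go interface. The sketch below only illustrates the shape being described here; the real kpng types and signatures live in the repo and differ, so treat every name in it as a stand-in.

    // Sketch of the backend-facing surface described above: four callbacks a
    // backend implements and the frontend drives. Illustrative names only.
    package main

    import "fmt"

    type Service struct{ Namespace, Name, ClusterIP string }
    type Endpoint struct{ Namespace, Service, IP string }

    // Backend is what a pluggable proxy implementation provides.
    type Backend interface {
        SetService(s *Service)
        DeleteService(namespace, name string)
        SetEndpoint(e *Endpoint)
        DeleteEndpoint(namespace, service, ip string)
    }

    // loggingBackend just prints what it would program into the dataplane.
    type loggingBackend struct{}

    func (loggingBackend) SetService(s *Service) {
        fmt.Println("set service", s.Namespace+"/"+s.Name, s.ClusterIP)
    }
    func (loggingBackend) DeleteService(ns, name string) {
        fmt.Println("delete service", ns+"/"+name)
    }
    func (loggingBackend) SetEndpoint(e *Endpoint) {
        fmt.Println("set endpoint", e.IP, "for", e.Service)
    }
    func (loggingBackend) DeleteEndpoint(ns, svc, ip string) {
        fmt.Println("delete endpoint", ip, "for", svc)
    }

    func main() {
        var b Backend = loggingBackend{}
        b.SetService(&Service{Namespace: "default", Name: "web", ClusterIP: "10.96.0.10"})
        b.SetEndpoint(&Endpoint{Namespace: "default", Service: "web", IP: "10.244.1.5"})
        b.DeleteEndpoint("default", "web", "10.244.1.5")
        b.DeleteService("default", "web")
    }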
So you have to implement those four functions, and that's where your pluggable back-end logic comes in. In the kpng implementation of the iptables proxy, these are the places where we do the kube-proxy sync logic: writing things in memory, to the caches and all that, the iptables rules that we want for the overall state. So that's the way this works; that's how you extend it, right.

So we've got a lot of stuff coming in. Me and Rajas are working on the user space backend; actually, Pallavi's working on the user space now, they're taking that over from us. Amim is working on a Windows user space PR that implements a Windows user space backend, and me and Doug are probably going to start a Windows kernel back end pretty soon. So that's how that all works!
It wrote an endpoint, the internal API server endpoint, so that CoreDNS could access it and then talk to the API server, and in order for that to have happened, CoreDNS has to be running on a node that has a working service proxy. So we know kpng is working. So now we can kubectl logs that, -n kube-system, right. We can see there's two containers in here: there's kpng, and then there's kpng-iptables, right.

So let's look at the kpng-iptables container, and then we can see that, here it is, we're storing the iptables rules, right. So this happens. If we logs -f this, hopefully we can scale these pods up and down and we can see that it's doing work, right. So, kubectl get pods -A... kubectl delete pod...
So let's delete one of these CoreDNS pods, -n kube-system, and then we'll see that our iptables thing flipped out and rewrote all the rules, right; it saw an endpoint was gone. So what happened there? Well, what happened there was this:

that global-state thing, the other kpng container watching the API server, saw a change, and so then, when the back end called back out to that model container, it saw, oh, there's a change, and now I need to go rewrite my back-end routing rules, right. So that's how that whole thing worked. That's kpng 101; that's the basics of it. It's actually not that complicated once you kind of think about it. And so we do have a new set of tests.
We do now have a sort of set of service conformance tests that will tell you about cluster IPs and node ports and load balancers and external names, and whether they're implemented the right way, and whether all your services are working across all the pods and all the nodes in your cluster, because we're seeing more and more of these different service proxy extensions in the community. So that's a new test.

Oh yeah, so those are these service... here we go, these ones, so here's where they are. And then, I don't know what you're doing, but if you want to hang out Wednesday, we're going to do a stream about this and go through the details of how to use it and everything else.
I was gonna ask another question, which is that, you know, it'd be kind of interesting to dig into this at some point, but some of the challenges that we have with kube-proxy implementations are the time it takes, because it effectively is a distributed system, right. Every node has its own view of the world, and regardless of how we try to keep them in sync, there's always going to be some latency involved in whether all of the nodes determine that an endpoint has become unhealthy fast enough to make a change in the way they might decide to route traffic.
And it'd be interesting to see kpng's take on that, wherein, because you're basically building business logic on the node now, is the node able to make a faster decision about whether health is active or not? Like, when a pod gets marked unready for whatever reason, if a pod fails its health check or whatever, if the node could detect that that change had happened faster, then that might be...
Yeah, yeah, right. So you're faster already: the API server is going to be able to tell you about that faster, your watches are not going to have as much latency, you're going to have better performance anyways. But you don't even need to talk to the API server, because you're on the node, right. So you could, I mean, I guess you could start to... you could make a back end that was aware of the state of things on the node, and
implement, like, a circuit breaker pattern, right; do the circuit breaker thing in the implementation at kpng. You know, circuit breaker is that pattern wherein you have multiple backends behind a service and you're addressing them individually, and you try to connect to that first one, and it may be as simple as: try to make the connect call, and if the connect call takes longer than a defined latency, then drop.
So, because kpng is effectively in the data path, making the routing decision, it could implement a circuit breaker pattern, where I could say, I tried to make that connect call, it took longer than I expected for it to establish, we're going to move on to the next one.
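A very rough sketch of that idea on the node side: try one endpoint with a bounded connect timeout and fall through to the next if it doesn't establish in time. This is just the pattern being discussed, not something kpng or Cilium actually ships; the endpoints and the timeout below are made up.

    // Sketch of the circuit-breaker-style fallback discussed above: attempt a
    // connect with a deadline and move on to the next endpoint if it is slow.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialFirstHealthy tries each endpoint in order and returns the first
    // connection that establishes within the per-attempt timeout.
    func dialFirstHealthy(endpoints []string, timeout time.Duration) (net.Conn, error) {
        var lastErr error
        for _, ep := range endpoints {
            conn, err := net.DialTimeout("tcp", ep, timeout)
            if err == nil {
                return conn, nil // established fast enough; use it
            }
            lastErr = err // too slow or refused; try the next one
        }
        return nil, fmt.Errorf("no endpoint answered in time: %w", lastErr)
    }

    func main() {
        endpoints := []string{"10.244.1.5:8080", "10.244.2.7:8080"} // hypothetical pod IPs
        conn, err := dialFirstHealthy(endpoints, 250*time.Millisecond)
        if err != nil {
            fmt.Println("all endpoints failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }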
It's not really about trying to understand those things that are on the network, or sharing that network with you. It's more about: instead of implementing this as manipulating the packet, where you have determined what the destination of that packet is, you send it out the door, and it works or it doesn't, right, instead you connect first, you try to connect, and you have some...
You know, look at iptables: we determine, based on a percentage of chance and the endpoints we are currently under the impression are healthy, we pick one, we manipulate the destination IP address of the packet, and we send it, and we don't know or care whether it ever connected, right.
No, I mean, there are really useful cases. Like, what happens if you have one node that's overloaded, so you want to push traffic to healthy nodes? You basically just randomly pick one node; if it's unhealthy, then shift: it'll just transparently try another node before the user runs into a failed connection, right. Or...
The way that's implemented in iptables is: you pick one, by some percentage of chance, and then you manipulate the destination IP and off you send it, right.

That's fair, yeah, but the difference here is that you're saying your business logic is able to connect before allowing that connection to establish.
And the time it takes for this rule to become propagated all the way down to iptables on every node is not insignificant, right. If there's a change on some node in a thousand-node cluster where the API server is under significant load, then it will take longer for that event to propagate down to the kube-proxy piece.

You know, and so the question becomes: this is another take on basically solving the same problem, but where you basically say, instead of just kicking it out the door, what if we look to see whether the connection was healthy? And we could be a little bit more intelligent about what that connection-to-health means, right. Like, if the proxy says hold this connection, because it's a high-priority connection or whatever, then I want to go ahead and try to establish a connection.
Yeah, so this is... and now this is what he's talking about with the scale thing, right. He's talking about when you have a real cluster with hundreds of pods; now this becomes a thing, where you have to read through these rules one at a time, and then, finally, at the end, you have a 50% probability of picking the last one, but you use it. So this is kind of the classic...
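For anyone who hasn't read those chains: kube-proxy's iptables mode spreads traffic across N endpoints by giving the first rule a match probability of 1/N, the next 1/(N-1), and so on, with the last rule as the unconditional fallback, which is why picking the last one looks like a 50% coin flip when two are left. A quick sketch of that arithmetic (illustrative only):

    // Per-rule probabilities that give every endpoint an even 1/N overall
    // chance, the way kube-proxy's iptables mode chains its statistic rules.
    package main

    import "fmt"

    func main() {
        const n = 4 // endpoints behind the service
        remaining := 1.0
        for i := 0; i < n; i++ {
            perRule := 1.0 / float64(n-i)  // --probability on rule i
            overall := remaining * perRule // chance this endpoint is picked
            fmt.Printf("endpoint %d: rule probability %.3f, overall %.3f\n", i, perRule, overall)
            remaining *= 1 - perRule
        }
    }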
Yeah, so I think the thing is, yes, you could make these local, and the hardest part about this is that there's never been an easy way to plug this logic in, so that somebody could, for example, focus on that part of the problem. And the other part of it is, even if somebody did plug it in, it's not super easy to know that you haven't broken some basic semantic rule that defines a Kubernetes cluster.
So I think we're in a position where we can solve both those problems here, with some of the things we've shown you today and some of the stuff Zach will show you on Wednesday, right. So, yeah.
Having me, yeah. I mean, I feel like we've gotten through... we've gone through the code, everyone knows how to run it. This runs from source, so you run the hack script locally, you can change the code. We've got 60 issues, and all of our issues... what we're really focusing on now...

Yeah, so I just made this one today so that I could say we had 60 issues, other than this one, right. So these are all... right, so Pair, over at Calum Networks, has been working really hard at helping to define each one of these issues in a sort of modular way, and so everybody who's helped, like Rajas...
You can see, one of the things I'm real proud of about this project is it's one of the only upstream projects where we really work together on everything. So you can see Neha and Anusha are working together on this one, Doug and Fred on that one. Well, these are just who's working on what, but actually most of these are done like pair programming on stuff. So you don't have to be a networking expert to get involved here. You can just be like, hey,

I want to work on something, somebody hack on this with me, and then we'll kind of find you a partner to do it with. So that's awesome: you don't have to be some super smart networking person to do this. Any idiot can help us, including me, even Nishad with that hat.

Yeah, but...
The only thing I know about circuit breaker is that it's basically meant to both protect the remote side from being overloaded and allow the client to fail fast. But I think what Duffie's proposing is more about protecting the client from ever even hitting that fail-fast state, you know.

It's effectively analogous, right. It still comes down to failing fast, and then what you do when that connection... I mean, at this point the application would probably never know that it failed, right. You have a better decision-making process about where to send that traffic.
I wanted to show folks one other thing that we spent some time on that was really interesting. About five years ago, Mikaël filed this issue, and we didn't know about it, but me and Anusha were looking at this problem where we were failing one of the k8s IP tests, where you preserve source pod IPs: you hit a service proxy and it preserves the original pod IP, right,

when you go through the cluster IP. And it turns out that there's a flag in kube-proxy called masquerade-all, and if you turn that on, what it allows you to do is, essentially, like a user space proxy: it makes it so that all of your pod source IPs are hidden behind the service IP, right. And it turns out that the default for that was false.
But we accidentally made it true, whoops, when we did this in kpng. So, yeah, we broke that test, and it took us a while to figure out what was wrong. We had to spin up another kind cluster and look at the exact same iptables rule created during that exact same test, and afterwards we saw that this line wasn't being written. So then, what happened was, Ben implemented Mikaël's patch from five years ago, and what he did is he added this thing in here,
where, if you have that option turned on, then it turns on this masquerade mark, right, and if you turn on that masquerade mark, then whenever you hit the service, your incoming pod IP is hidden behind the service IP. And so, anyways, we were going through that code today and we realized that if you hex-decode this, it says "mask".
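That decode is easy to reproduce: for example, the hex string 6d61736b is the ASCII bytes for "mask". Whether that is the exact constant used for the mark in the proxy code isn't asserted here; this just shows the trick for checking such a value yourself.

    // Decode a hex string to see the ASCII it spells; "6d61736b" prints "mask".
    package main

    import (
        "encoding/hex"
        "fmt"
    )

    func main() {
        raw, err := hex.DecodeString("6d61736b")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", raw) // prints: mask
    }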
So we were wondering why you even have to do that. It turns out, I guess, that iptables, when you give it these masquerade marks, needs an integer, because it's using a bit mask or something like that. I don't know; I gave up at that point. But anyways, Anusha fixed this, we fixed that bug recently, and the reason I've shown this to people is that this is the type of work that's valuable, right.
Yeah, bye, y'all, next time. All right, thanks, Nishad, thanks, Duffie, thanks everybody for coming to TGIK, and come to Antrea Live this Wednesday. Come watch Zach, antrea.io/live, come hang out. We're going to introduce a new tool we've been working on. Cool, bye, everybody.