From YouTube: TGI Kubernetes 090: Grokking Kubernetes - kube-proxy

Description

Notes archived at https://github.com/heptio/tgik/blob/master/episodes/090/README.md

Come hang out with Duffie Cooley as he continues the "grokking Kubernetes" series with a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Discovered something interesting about YouTube. You know, it's always a learning process. I'm about two minutes late, because I was actually talking to you all thinking that I saw you. You were able to see me — but you weren't — so I hope that you can all see me now. It looks like everything is streaming along wonderfully. So if you can, give me a thumbs up. Good to see you all. Let's see who's with us today. We had Wellead join — I'm not sure where he's from, but he's definitely not far from sleep.
Probably. We've got Martin from the Netherlands. We have me replying back that sleep comes for us all — you know how it is. We have LeMat II coming in and joining us; good to see you, sir. We have Rory in it for the long haul, giving us the name of the city that he's in, which I think only Rory, and maybe a few other people, can pronounce. Go ahead.
Welcome, Johar. I'm glad to hear that. Alemany, I'm glad that you really like the series — I'm definitely enjoying doing it, and I plan on doing quite a bit more with it. Right now I'm kind of working through the whole series of components themselves, and then I want to move on to some of the higher-level primitives, like what's actually happening in the API server. That's actually why I listed the API server last in the series.
A
That's
because
once
I
get
to
that
part,
I
want
to
kind
of
start
going
through
things
like
the
deployment
and
like
really
digging
into
like
what
a
deployment
actually
does
talking
about.
Stateful
sets
and
like
when
you
pick
them
over
other
things,
kind
of
digging
into
that
sort
of
stuff,
so
I'm
really
glad
you're
digging
it
and
Shahar
also
I'm
glad
you're
doing
it
as
well.
A
It
would
be
a
pretty
neat
book.
I
think
that
I
struggled
with
whether
it
makes
sense
as
a
conference
I,
don't
know,
I'll
give
that
some
more
thought,
hello,
Suresh
from
Hamburg
again
Christoph
from
düsseldorf,
Germany
and
Antoine
from
Paris
France
could
see
Edwin
and
Vadim
from
Kiev
Ukraine
and
Stevo
from
Germany
lips,
Germany
and
Slav
from
Sofia
Bulgaria
such
an
amazing,
worldwide
community.
We
have
here,
you
know
it's
just
absolutely
a
bit
I
mean
from
Strasbourg
and
Niren
hello,
Duren
from
Jersey
he's
a
good
friend
of
mine.
A
He
works
with
me
here
at
VMware,
super
smart
guy
and
always
like
you
know,
engaging
in
everything,
that's
fun
about,
but
the
crazy
changes
that
we
see
in
technology
so
super
awesome
welcome
there
and
we
have
David
from
Saint
Marie
Ontario
Canada.
We
have
yet
team
from
Ashburn
Virginia
and
have
lesson
from
South
City
California
like
South
San,
Francisco
er
is
there
actually
a
city
called
South
City
in
California.
I
would
not
be
surprised
if
there
was
a
city
called
that
we
have
Gustavo
from
Brazil
and
Shawn
from
Birmingham
England.
It's very good to see you all; I'm glad you're all here. In this episode we're going to talk about kube-proxy as a component. We're going to dig into how it's deployed, some of the configuration stuff for it — that sort of thing. Oh, South San Francisco, that makes sense. But before we do that, we're actually going to revisit a little bit about the kubelet that I meant to mention while I was on the call a couple of weeks ago.

But you know how things happen — they get a little out of hand, and we didn't get to it. So I definitely want to come back to the kubelet API and show you where to discover more information about it and that sort of thing. We're going to go back to that real quick, then we'll move on to kube-proxy. But before we do that, let's check out our notes for this week.
Now, ask for the blessing of the demo gods as we go into this, on this beautiful Friday the 13th. I think we're in good shape, but you never want to be too overconfident, right? You want to think that there's always room for variation there. The next thing we have is this really neat thing: the numerology stuff.
It's palindrome week here in the US. It's not palindrome week in other parts of the world, where they order their dates a little differently. Here in the US we do month/day/year; other places — you can say India, for example — do day/month/year.
Well, I guess I'm not going to put my superstitions on y'all — I just thought it was interesting. Other interesting stuff happening this week: Kubernetes 1.16 has been bumped to September eighteenth. There's a lot that's been going into 1.16, including API removal. You've heard me talk about this — I did a whole episode on it a couple of weeks ago, and I highly recommend that anybody watching this video take a moment and understand what I'm referring to when I refer to API removal.
Remember that some APIs that you may be taking for granted are about to change, and they will no longer work the way that you expect them to. This will happen in very insidious ways, because there will be manifests you have always just applied to Kubernetes that worked — and now there will be manifests that no longer work.
But yeah — API removal in 1.16 is super, super important. I'm reminding you now, and I'll remind you again, and we'll talk about it some more. So let me give you a link to that real quick: it's actually "API removal in 1.16", all right.
That deployment will not be accepted by the API server, because that object is no longer known by the API server. You have to use apps/v1 for DaemonSet, Deployment, StatefulSet, and ReplicaSet. I'm actually more worried about Deployment than I am about most of the other references, because I think that quite a lot of people out there working with Kubernetes are going to be surprised that it doesn't work the way they expect. It's an easy fix, but just beware. Spread the word — help me spread the word about it.
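As a minimal illustration of that easy fix, a Deployment that used one of the removed API groups just needs its apiVersion bumped to apps/v1 (which also requires an explicit selector); the names below are hypothetical:

```yaml
# Before 1.16 this may have said apiVersion: extensions/v1beta1,
# which the API server no longer accepts for Deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web        # hypothetical name
spec:
  replicas: 2
  selector:                # required in apps/v1
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```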
All right, enough on that subject, let's move on. What else do we have? So 1.16 got pushed a little bit, mainly because of the amount of change that's happening — they want to make sure that it's a really good release, so they're pushing the date out a little bit. The release will be a little bit later.
"What is the difference between a Kubernetes cluster using a hundred n1-standard-1 one-vCPU VMs, versus one n1-standard-96, versus six n1-standard-16s? I asked this question multiple times in the Kubernetes community; no one suggested an answer. If you are unsure about the answer, then there is something for you to learn from my experience — or skip ahead." The author promised an answer: they woke up in the middle of the night determined to reduce their infrastructure costs.
A
We
used
expensive
data
center.
We
use
different
types
of
machine.
We
use
force
on
a
pot
of
the
scaler,
these
cluster
autoscaler
to
scale
no
fools
amuse,
preemptable
VMs
using
exclusively
pre-emptive
Williams
kind
of
similar
to
like
spot
instances
in
AWS.
They
were
able
to
lower
their
cost
even
more
major
red
flag
about
that
nagging
feeling.
Their pods averaged 5m of CPU while idle — idle meaning no task in the queue for them — and a minute later might need much more CPU to do work. They thought it made sense to have a configuration such as this, which translates to: idle pods don't consume more than 20 millicores, and active, healthy pods peak at 200 millicores of CPU. However, when —
They kept increasing the requested pod resources until they ended up with the following: requests around 250 millicores. With this configuration the cluster was running smoothly, but it meant even idle pods were pre-allocated more CPU time than they needed. I actually wonder what the application is — like whether it's Java or some of the other things. All right.
Right — because you couldn't distribute the work across more CPUs, especially as you added more pods and those sorts of things. So that would make me nervous. I would think more vCPUs, or some balance between vCPUs and memory or I/O, would actually be where you'd want to head. So in this case they went to 16 vCPUs. It's an interesting article — I mean, it's an interesting experience. I'm sorry that they did that in production, but it's definitely an interesting study into what's happening there.
Patrick Lang, who I am proud to say I'll be doing a workshop with at KubeCon this year — I'll be working with Patrick Lang, with James Nunnally, and with Mr. Benjamin Elder. We'll be doing a workshop on how to use kind to do development against the Kubernetes code base, so definitely check that out. Patrick Lang is like our Windows guy — one of the senior software engineers at Microsoft, working really closely with what's happening with Windows Server containers and those sorts of things.
So I'm really glad to see him out there talking about that stuff. If you're interested in Windows and containers, follow that guy — he has a lot of really good information and he's doing a lot of work around it. So I highly recommend checking out this podcast if you're interested in Windows containers. Let's go back to our questions. Let's see, who else do we have here? We have Karen from Hyderabad. We have Janislav — tell me about the full moon in Europe.
Help us get this survey out there — it's really great to have people do it. There's an opportunity for participants to receive one of three two-hundred-dollar Amazon gift card prizes, and we really use this information to help inform us. I have no idea whether my video will be available.
Yes, they are — but they take a long time to post. If you go to the YouTube channel for Black Hat, there's a huge number of videos that are available for free, but they always take a few months to actually go out. So I'm actually looking forward to seeing myself — even though I know I will sound like a chipmunk, I'm still looking forward to seeing the video.
We use these surveys to understand better what the areas are that people are really focusing on. One of the things that I actually got personally out of the last survey was what CNIs people were using in production, and whether that was changing over time. There's a lot of really great questions in here — please feel free to spend some time answering them, and again, help us spread the word and get them out there.
A
We
have
somebody
stop.
Okay,
they
have
crews,
which
is
a
car
company
and
a
self-driving
car
company
working
on
open
sourcing,
isopod
an
expressive
dsl
framework
for
cooming.
These
configuration
after
I
mean
this
week.
The
3-sphere
was
again
kind
of
thrown
fuel
on
the
fire
around
yamo
and
some
of
the
other
stuff
I
mean
I,
know
that
there's
obviously
always.
That is going to be the thing that enables your team to approach a tool and be successful with it right now. Whether that means you go the Pulumi route, where you decide that instead of taking on the overhead of learning an entire new language, you want to basically just extend your knowledge of an existing language — like Go or Python — into the domain of handling infrastructure. Right, I think —
A
Pretty
interesting
play
and
that's
actually
where
that's
the
bet
that
plumies
making,
but
there
are
a
ton
of
things
out
there,
but
there
are
a
ton
of
things
out
there
that
are
actually
trying
to
solve
this
problem,
and
this
is
the
new
one.
So
these
folks
are
actually
have
actually
made
I
when
they
call
it
isopod
I'm
glad
they
didn't
call
it
iPod
in
retrospect,
I
would
have
been
confusing,
so
in
a
Carl
Eisenberg
described
how
the
past
he
was
building
a
multi-tenant
computer
platform
on
Cougar
need
is
to
support
hundreds
of
engineers.
I'm not going to dig into it at this time, but definitely check it out if it's interesting to you, and then give us feedback. If you think it could be an episode of TGIK, let us know. All right — it's t-shirt time. From now until KubeCon + CloudNativeCon in November, I will give out Kubernetes t-shirts to people on this forum who are helping new users and contributing to the community, so feel free to dive in.
I'll be using a mix of secret criteria to determine the prizes, and I'll be giving them out. This is Mr. Jorge Castro, who I work with here at VMware. He's one of the community leaders inside of the Kubernetes set of communities, and he will basically be rewarding people who are actually out there trying to do their best and helping others to succeed with Kubernetes and technologies like it. So jump into the forums and help out — basically, this is Discuss.
My last news article: did you know that the CNCF does webinars? There are quite a few of them, and so if you're interested in learning more about the space, or about different technologies, definitely check out the webinars that are available at cncf.io/webinars. There's a bunch of them coming. One of the ones that actually happens on a pretty regular basis is one that's called —
A
What's
new
in
kubernetes
I
highly
recommend
that,
because
it's
actually
usually
it
involves
the
release,
lead
talking
about
kubernetes
and
what
they
do
is
really
dig
into
the
detail
about
like
what
the
what
new
stuff
has
come
in
116
and
and
the
kind
of
sharp
edges
or
things
you
have
to
be
aware
of
before
you
migrate
to
it.
That
sort
of
stuff
so
definitely
check.
A
All
right,
that
is
all
the
notes.
Thank
you
for
putting
up
with
the
all
the
notes.
I
mean
it
really
doesn't
seem
like
it
was
that
many
notes,
but
it's
like
you
know
it
took
a
little
while
here
we
are.
Let's
go
back
to
the
notes.
I
wish
I
do
with
my
black
hat,
so
that
would
be
available.
I
keep
looking.
A
He
says
when
you
get
to
the
main
topic.
Can
you
give
us
your
thoughts
on
psyllium
I,
like
psyllium
a
lot?
They
just
came
out
with
a
new
release,
and
one
of
the
selling
points
is
replacing
cue
proxy
I
liked
if
they
replace
the
cue
proxy
a
lot,
but
we'll
talk
about
it
a
little
bit
more
we're
mckuhn,
yeah
I
mean
Rory.
— is doing pretty well at it; security, you know, so I think it's good for him. All right. I have one — check this box where it says kubelet API — because I was realizing, you know what, I didn't really spend as much time on the kubelet API as I should have. So to address that, what I'm going to do is show you where in the code base you can find information about APIs that are expressed by things like the kubelet.
A
I
want
to
talk
just
real
quickly
about
kind
of
like
layout
for
things
which
I
think
is
you
know
it
may
be
useful
to
you.
You
may
have
seen
this
before
and
you
may
not
have
seen
it,
but
the
way
that
things
are
kind
of
built.
You
know
the
core
components
within
kubernetes
are
built,
there's
actually
like
a
command
a
directory,
CM
Deacon
directory,
in
which
you
can
see
all
of
the
components
that
make
up
some
of
the
shared
components
and
actually
all
of
the
compiled.
— like the main components: kube-scheduler, kube-proxy, kube-controller-manager. And then we dig into kubelet, of course. If we dig into kubelet here, we can see there's really not a lot underneath this directory, right? It's mainly a configuration surface; it doesn't really get into the actual code. And then you might be thinking: okay, well, that's interesting, but where is the actual code that represents the kubelet? Well, it's not going to be in cmd — that's kind of like your entry point. That's why the configuration surface is actually exposed there.
A
So
if
I
go
underneath
kubernetes
kubernetes
Package
cubelet,
then
I
have
all
of
the
code
that
actually
represents
the
qiblah
itself
and
a
lot
of
the
and,
if
you're
interested
in
digging
into
it
more
and
understanding
what's
available.
This
is
a
great
place
to
kind
of
like
if
you
want
to
just
go,
read
all
the
Kuban
associated
with
the
cubelet.
A
This
is
where
you
would
go
most
of
the
things
within
kubernetes
that
have
an
api
kind
of
express
those
api's
in
a
single
place
where
they
try
to
like
make
them
available
to
people
who
are
so
that
so
that
it
can
be
understood.
And
if
you
look
at
projects
like
cube,
ATM
or
kind,
they
kind
of
express
it
in
the
same
way
right.
They
basically
will
try
to
put
all
their
api's
underneath
the
api's
in
a
way
that
they
can
actually
be
the
way
that
they
can
be
well
understood.
From
that
perspective,.
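A rough sketch of the layout being described (paths within the kubernetes/kubernetes repository; exact contents vary by release):

```
kubernetes/
├── cmd/
│   ├── kubelet/          # entry point + flag/config surface only
│   ├── kube-proxy/
│   ├── kube-scheduler/
│   └── kube-controller-manager/
└── pkg/
    ├── kubelet/          # the actual kubelet implementation
    └── proxy/            # kube-proxy's iptables/ipvs implementations
```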
Let's see: cadvisor; we have a debug pprof endpoint; we have a /run endpoint for basically starting — let's see — an exec handler, and for starting the debug handler. All of these paths are exposed by the kubelet. And we have port-forward. A lot of the configuration that is described — what you can do to the kubelet, or to any given pod on a kubelet — things like exec'ing into them.
But it talks specifically about how the mechanism works, right? So if you, on any machine — from your laptop or what have you — type kubectl exec into a particular pod within Kubernetes, that call will then go to the API server. Let's go down to the bottom graph: that call will go down into the API server, and then it will be forwarded —
A
It
will
be
forwarded
to
the
cubelet
related
to
the
pod
that
you're
trying
to
exec
you
to,
and
it
will
actually
expect
it
will
attach
to
that
pod
by
proxy
right.
So
the
API
server
will
approximate
that
call
to
the
cubelet
via
that
cubelet
api
that
we're
looking
at
over
here
right.
So
these
this
cubed
api
that
we're
actually
looking
at
is
actually
going
to
describe
like
what
calls
are
necessary
to
describe
things
like
exec
and
those
sorts
of
things
right.
So
we
can
do
things
that
get
the
running
pods.
Getting the running pods — yeah, I mean, there's quite a lot here. Here we go: here's the exec request params, right — the actual parameters necessary to exec into a container. Because you're actually going to hit one of the containers within a pod, it needs to know how to populate that, and here it is specifying what the required params are and making sure that it gets that information — or it will just reject that request out of hand.
So all of this is represented by the API exposed by the kubelet, and that's what I wanted to actually talk about. Yeah, and you're right — this is exactly where it was. Mr. Dims, are you raising your hand? Do you have a question, or are you just saying "preach"? I can't really tell; I'm hoping it's "preach". I guess we'll see. Are you saying hello? Hello, Mr. Dims. All right, so yeah: the kubelet actually has quite an extensive API that it exposes to the API server, and for most of our normal day-to-day use —
— we never really see that, right? We don't usually interact with the kubelet directly. Most of the time we interact with the API server — through exec calls or log calls or attach calls or those sorts of things — and the API server interacts with the kubelets on our behalf. But what I wanted you all to think about, when we were talking about the kubelet last time, is to remember that the kubelet is doing so much more of the heavy lifting in this picture than we think.
A
You
know
orchestrating
the
proxy,
the
cubelet
that
is
responsible
for
that
work
right.
It's
actually
handling
all
the
authorization
authentication
stuff
its
handling
all
of
that
bit
and
it's
also
handling
the
proxy
piece.
But
when
you're
interacting
when
you're
typing
commands
in
and
out
right,
those
commands
are
actually
happening
inside
of
the
container
via
a
proxy
connection
back
to
that
cubelet,
and
that's
what
I
was
trying
to
express
by
the
cubed
API.
So
if
you
ever
really
want
to
just
like
introspect
and
see
okay
like
what
all
can
the
cubelet
do?
They were saying that they never remove tabs, and I was like, wow — I want to copy this one down before I forget.
So definitely check that out. All right — next up in the series is kube-proxy. We're going to talk about how it works, we're going to talk about how it authenticates to the API server, we're going to talk about the different modes for kube-proxy. We're going to talk a little bit about services, theory of operation, config, and metrics. So let's jump into that.
A
Have
to
cook
reading
these
clusters
that
I
created
using
kind
in
these
two
kubernetes
clusters
are
configured
in
each
of
the
to
do
two
different
ways
that
you
can
configure
queue
proxy
well,
two
of
the
ways
I
think
it's
the
two
modes
for
queue
proxies
that
are
available
today.
I
think.
Actually
there
used
to
be
three
modes.
There
was
a
user
space
proxy
as
well,
but
I
think
that's,
finally,
being
deprecated
and
I
suspect
it'll
be
removed
if
it's
not
already
removed
in
1/16
it'll
be
going
away
soon.
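For reference, the mode is selected in the KubeProxyConfiguration file that kube-proxy reads; a sketch (kind and kubeadm generate something similar in the kube-proxy ConfigMap):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "iptables" (the default) or "ipvs"; the old "userspace"
# mode is deprecated.
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin across endpoints
```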
Kube-proxy is managing the service abstraction within Kubernetes. I'm hoping that many of you already know what services are, but we're going to talk a little bit about them — what they do and those sorts of things. Here are the flags that are available within kube-proxy, some of the configurable mechanisms that are in here, and we can talk about some of the ways that they work.
A
Let's
break
down
off
the
API
server,
how
does
you
proxy
authenticate
to
the
API
server
and
does
anything
try
to
interact
with
API
serve
with
with
queue
proxy
directly?
So
the
first
thing
I
wanted
to
talk
about
when
we
talk
about
queue.
Proxy
is
kind
of
like
how
its
deployed
and
and
and
and
kind
of
help
kind
of
Express
that
a
little
bit
so
I'm
gonna
move
into
my
IP
tables
cluster.
Here.
This
is
a
this.
You can see that I have two daemon sets deployed, right? One that's actually handling the network CNI piece — in this cluster I was using Canal — and also a daemon set for kube-proxy. Now, it's an interesting question: why use a daemon set? Why is that primitive, the daemon set, interesting or the right thing for this particular case? So let's look at that daemon set real quick — describe, yes.
A
So
here's
our
Damon
say-
and
we
have
a
selector
that
says,
don't
select-
are
looking
for
any
version
of
kubernetes
running
Linux
like
presumably
because,
if
there's
a
different
operating
system,
we
would
want
to
actually
make
sure
that
we
did
apply
it
the
correct
version
of
key
proxy.
For
that
other
operating
system.
We
have
the
desired
set
of
nodes,
which
is
that
all
of
the
nodes
in
our
cluster
and
the
up-to-date.
We have some things here that are actually used to configure kube-proxy — to implement the configuration of iptables and those sorts of things on the underlying host. There are some really interesting mounts for kube-proxy, so let's just talk through those. We are actually passing a config file to kube-proxy to configure it, and we're also passing this hostname-override flag to kube-proxy.
This means that this is a pretty high-value target for somebody attacking Kubernetes clusters, because if you think about it, this is going to run on every node, and it has access to things like the lib/modules directory — this is where your kernel modules are, right? The reason it has this is because kube-proxy might be the first thing that tries to kickstart iptables, and if it is the first thing to start up iptables —
A
Ip
tables
has
a
mechanism
by
which
the
very
first
time
somebody
runs
IP
tables
commands.
It
will
auto
load
a
lot
of
the
kernel
modules
necessary
to
Manitou,
to
support
that,
and
that's
all
fine,
except
that
when
that
auto
load
happens,
you
have
to
make
sure
that
the
auto
mode
is
actually
the
IP
tables
command
has
access
to
the
correct
directory
where
those
modules
are
otherwise
it
might
reload
modules
that
may
not
be
compatible
with
the
underlying
kernel.
A
That's
why
you
want
to
mount
think,
like
Lib
modules,
from
the
underlying
host
very
good
reason
to
have
that
there,
but
at
the
same
time,
this
means
that
it's
kind
of
high
priority
target
for
things
that
are
trying
to
secure
Cooper,
tedious
right,
because
it
means
that,
like
we
have
the
ability
to
like
I
guess,
it's
read-only
amount.
So
it's
not
quite
as
bad.
A
So
we
know
the
amount
means
that,
like
at
least
those
modules
will
only
be
mountable
from
in
there,
but
remember
that
we
also
have
to
give
permission
based
on
to
load
those
modules
into
the
into
the
underlying
node.
So
permission
wise
is,
is
still
pretty
interesting,
a
pretty
interesting
target.
We
have
next
tables
Locke,
which
is
really
about
basically
making
sure
that
we
we
don't
stomp
on
ourselves
when
configuring
IP
tables
IP
tables
is
not
atomic.
When
you're
configuring
IP
tables,
it
means
what
I
mean
by
not
atomic.
I should say: there are usually quite a few writers to iptables inside of Kubernetes, because you have things like the kubelet configuring iptables, you have kube-proxy configuring iptables, and if you have a CNI implementation that's going to handle network policy, that network policy is also going to be implemented in iptables. So that's actually quite a lot of cooks in the kitchen when you think about it — there's a lot of rewriting that has to happen against a global view of iptables because of that, yeah.
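Those two mounts look roughly like this in the kube-proxy pod spec (a sketch consistent with kubeadm-generated manifests):

```yaml
volumeMounts:
- mountPath: /lib/modules
  name: lib-modules
  readOnly: true            # kernel modules from the host, read-only
- mountPath: /run/xtables.lock
  name: xtables-lock        # shared lock so concurrent iptables writers cooperate
volumes:
- name: lib-modules
  hostPath:
    path: /lib/modules
- name: xtables-lock
  hostPath:
    path: /run/xtables.lock
    type: FileOrCreate
```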
It has a priority class of system-node-critical, which means — I believe that means it still has to run inside the kube-system namespace — and our goal with that priority class is to make sure that other things get evicted before kube-proxy, no matter what. Yes, that's the correct answer: nftables does solve the problem, but at the same time — I said I wasn't going to get into this, but at the same time — it actually represents another series of problems.
A
Come
on
I'm
gonna
give
a
quick
overview
of
that
and
then
we'll
move
on
to
other
stuff
so
and
if
tables
does
actually
address
the
problem,
the
interesting
thing
about
NF
tables
is
that
they're
trying.
Obviously
you
know
to
make
it
so
that
users
of
IP
tables
have
a
pretty
fluid
experience
in
moving
between
the
different
kernel
modules.
Now
what's
interesting,
is
that
the
the
way
that
they
went
about
that
was
not
in
normalizing
the
ABI,
as
ever
as
it
is
represented
by
the
kernel
interface?
A
Instead,
they
normalized
the
ABI
API,
as
it
was
represented
by
the
binary
right.
So
I
could
use
IP
tables
in
a
I
in
in
in
the
in
the
legacy
mode
in
which
I'm
actually
using
kernel
modules
associated
with
the
original
IP
tables.
Implementation
for
that
particular
version.
But
if
I
wanted
to
move
to
the
N
F
tables
implementation,
I
could
actually
I
would
I
would
I
would
use
the
same
IP
tables
command,
but
I
would
have
to
check
out
the
new
version
of
of
that
command.
A
Say the node is running something like Debian Buster, for example, and then you deploy kube-proxy on that node, and the iptables binary that kube-proxy uses isn't actually compatible with what the underlying node uses. Now at this point kube-proxy is not going to work. It's not going to be able to interact with iptables in a useful way, because there's going to be that mismatch between the two APIs.
This has come up in the community, and it's something that we're working on trying to resolve, and I feel like there's actually quite a lot of thinking, in a lot of different directions, about how to go about fixing it. But it is an interesting problem in the interface, especially if you think about how containerization and the underlying node configuration happen. In this case the real challenge is that kube-proxy is an implementation of configuration that is specific to the node.
Ask me again about Cilium, though, because — yes, sir — in my opinion, I feel like they're in a much better position to actually solve this problem than a lot of products and projects are, because they're able to take effectively that entire space and solve it in a much more reasonable way. All right — that's a lot of information about iptables, a lot of information about what kube-proxy does.
I want to get a little bit more generic about why daemon sets, so let's come back to that original question. It's a daemon set because we want to make sure that, as you add more nodes or remove nodes — well, specifically as you add more nodes — we ensure that kube-proxy runs on every node. Now, that in itself is kind of interesting: it means that every node —
— every node is going to be running kube-proxy. Now, this is actually one of the things I really dig about kube-proxy; this is an interesting thing, right? It means that it represents a distributed system, but maybe not in the way that we normally think about it. As we add more nodes, we add more kube-proxy instances, and each of those kube-proxy instances — in the current design, before EndpointSlice — has a view of the entire list of endpoints and the entire list of services.
A
But
what
I
mean
here
is
if
I
have
a
thousand
knows
each
of
those
and
COO
proxy
instances.
Is
configuring
IP
tables
on
a
given
node
as
they
see
and
understand
the
state
of
services
for
all
of
the
services
within
the
kubernetes
cluster?
This
is
consistently
configured
on
each
node
cluster
wide
and
the
thing
that
that
is
implementing
that
is
cube
proxy
on
the
host.
So,
as
I
add
a
new
lead
us
through
this,
as
I
add
a
new.
Let's take a look at what that means in each of our two different clusters, just for a single node, and then we can talk about it. Because what I'm trying to get to is: I want you to understand that this is really fascinating, because it means the configuration may be unique on each node, but only because of timing — how long it actually takes for each node to understand the entire configuration of that service set globally. It has to be done on each node specifically.
— on the destination port, port 80, and then forward it to mark-masq. And then later on, if the destination — regardless of the source — is headed for 10.103.42.17/32, again that cluster IP, then send it to this service. Let's take a look at that service.
A
So
we
have
a
new,
a
new
stanza
with
an
IP
tables
that
describes
that
service
right
if
the
destination
is
that
then
I
want
you
to
prompt
commented.
You
know
cluster
IP
port
80,
and
this
is
actually
the
line
that
we
saw
before
the
queue
services
output.
And
then
we
do
this
kind
of
crazy
thing,
which
is
mostly
pretty
cool.
We
do
this
kind
of
crazy
thing
where
we
basically
try
to
randomize
across
the
known
entities
or
the
known
backends
for
a
particular
service,
and
that's
actually
where
this
cube
set
piece
comes
in
right.
A
B
B
A
A
A
B
A
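The chains being described look roughly like this in `iptables-save` output (the addresses and chain hashes here are hypothetical stand-ins):

```
# Service clusterIP rule in KUBE-SERVICES jumps to a per-service chain
-A KUBE-SERVICES -d 10.103.42.17/32 -p tcp --dport 80 \
  -m comment --comment "default/example: cluster IP" -j KUBE-SVC-XXXX
# The per-service chain randomizes across per-endpoint (KUBE-SEP) chains
-A KUBE-SVC-XXXX -m statistic --mode random --probability 0.5 -j KUBE-SEP-AAAA
-A KUBE-SVC-XXXX -j KUBE-SEP-BBBB
# Each KUBE-SEP chain DNATs to one backend pod
-A KUBE-SEP-AAAA -p tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-BBBB -p tcp -j DNAT --to-destination 10.244.2.2:80
```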
A little tough to read, right? There are definitely some challenges in understanding what's going to happen with a packet when we send it out. So we see the destination being this one, and if we look at that one, we can see that it's going to send 50 percent of the traffic to this endpoint.
A
Could
what
abused
cue
proxy
to
specify
a
certain
external
proxy
for
specific
domains
as
well?
Keep
rat
she's,
really
not
gonna.
Look
at
the
name
per
se.
It's
gonna!
Look
at
the
IP
I
mean
you
could
do
interesting
things,
probably
not
what
names
person
per
se
within
IP
tables,
but
you
could
actually
overwrite
the
rule
set
within
IP
tables
for
that
sort
of
stuff.
Alright,
so
let's
take
a
look
at
how
this
is
actually
so.
A
We see that we have two pods ready. There are only two pods created — 10.244.1.2 and 10.244.2.2 — they're both listening on port 80, and the implementation of kube-proxy, using iptables mode on that particular node, is going to actually handle the forwarding of that traffic, right. So if I do curl — actually, you know what, I'm probably going to want to do something else. I'm going to do kubectl edit deployment.
A
So what we're seeing in this output is that the traffic is actually balancing back and forth, in a pretty naive way, across the two endpoints defined within the cluster, right. And so in our grep here, what we're doing is trying to understand which of the two pods we're aimed at, and we're able to determine that from the hosts file, so we can see the IP address.
A
Yeah, but what I wanted to point out was effectively what I mean by a very naive load balancer, right. So although we can see the connections going back and forth, we can also see that the source IP address changes, and this highlights one of the important things to understand about kube-proxy: it is very naive in the way that it's making the routing decision. Right now I'm connected to the worker — worker one.
A
It
means
that
the
my
source
address
is
going
to
be
the
IP
address
of
the
node,
but
when
I
send
that
traffic
to
the
the
one
that's
locally
hosted
on
the
note
that
I'm
on
it
means
my
source
address
is
going
to
be
the
IP
addresses
within
that
network
range,
and
this
also
works
on
the
outside
right.
If
I
do.
A
So
now
I've
created
two
different
services:
one
service,
that's
actually
using
cluster
IP
and
one
service
is
actually
using
node
port
and,
if
I
do
cubic
it,
I'll
get
endpoints
for
nginx.
I
can
see
R
to
R
can
see
our
to
our
back-end
services
and
for
NP
I
can
see
our
two
back-end
services
both
resolving
to
the
same
IP
port
pair
right.
A
So in our case, you can still see that the source address is changing, right? When we go back and forth between the two hosts, we're bouncing back and forth between the two requests. Sometimes it's using my local IP address, sometimes it's using the IP address of the node that I connected to to actually handle the node port, and sometimes it's using the IP address of the locally relevant one. So it doesn't matter where we come at it; it's still going to change the source address. Exactly, Bob — you're, like, reading ahead.
A
Look
at
you
like
sitting
up
all
right?
Well,
not
all
right!
So
what
can
we
do
about
that,
though,
right
like
we
can?
Actually
we
can.
We
can
change
that
behavior
a
little
bit,
but
we
did
sexually,
but
I
wanted
to
make
sure
that
we,
you
know
that
you
all
get
the
idea
that
what's
interesting
about
this
configuration
is
that
each
of
the
instances
have
their
own
complete
view
of
all
of
the
services
that
are
defined
and
they're
not
configured
specifically
consistently
right.
A
And we can see that in this particular case the probability is a little different too, right, because we're actually sending it off to different targets depending on a probability that's been defined at service definition. And if I compare that output to worker zero or worker one, I can see the difference — there may be different output.
A
But
the
configuration
of
each
node
is
actually
response
is
for
each
q
proxy
instance
is
responsible
for
configuring.
The
services
as
they
are
defined,
or
as
they
are
understood
by
each
of
the
queue
processing
instances
and
their
watch
against
all
of
the
services
and
all
of
the
endpoints
within
the
cluster.
A
There is a kubernetes service that is defined — same thing, actually, for DNS. There is an internal cluster IP service that represents the API server, the kubernetes API server, and those are gonna balance across all of the actual control plane nodes. If there's only one control plane node, it will only target that one thing.
A
All
right,
in
our
case,
it
will
only
terminate
on
the
one
node
where
the
API
server
is
running.
This
is
a
single,
a
single
master
cluster,
but
if
I
brought
up
like
a
three
master
or
a
multi
master
clusters-
and
it
would
see
all
three
of
those
hosts
and
balanced
across
the
set
answer-
was
that
question
was
asked
by
clan.
A
A
What this is talking about is the source IP address as it needs to be configured by that particular service. Again, this is a naive implementation of a load balancer: it's going to forward that traffic to any healthy instance, regardless of where the source of the traffic is coming from.
A
— for the other types of services. Except for ExternalName — that one has nothing to do with kube-proxy — but cluster IP, node port and load balancer, all three, are configured, or implemented, by kube-proxy, and there are some knobs and dials, things like external traffic policy, that allow you to define exactly what that behavior might be. Before we get into exactly what those knobs and dials are, like —
A
Right, we can see both of the cluster IPs. Cluster IP is the default type of service that will be defined if you don't specify anything else. A cluster IP service will get an IP address from the services CIDR that you configure for the cluster. In our case we configured 10.96.0.0/16, so it will just randomly pick an IP address from that range and hand it off, and then it will actually handle any port-forwarding or port-mapping configuration that's necessary, and I'm —
A
So, node port — the way it's configured, right: we still get a cluster IP. Even though I didn't ask for a cluster IP, I always get a cluster IP, except for one particular case. The way that node port works is that I'm also going to get a node port, and this node port will be taken from a configured range on kube-proxy, where we specify the node port range — and the node port range, by default...
A
If
you
don't
specify
anything
else,
it's
going
to
be
port
30,000
to
port
32767,
so
you
have
2767
ports
to
use
within
that
range
that
are
going
to
be
expressed
by
each
of
the
by
each
of
the
queue
proxy
instances
again,
consistency
right
so
that
that
service
has
been
defined
with
that
port.
That
means
at
that
port
range
that
we've
just
described
a
30,000
to
32767
it's
going
to
be
configured
consistently
on
every
node
running
queue.
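As a quick check on the arithmetic, the default range is inclusive on both ends, so the count of usable node ports is:

```python
# Default range when kube-proxy's --service-node-port-range is not set.
NODE_PORT_MIN, NODE_PORT_MAX = 30000, 32767

# The range is inclusive, so add one when counting the ports.
available = NODE_PORT_MAX - NODE_PORT_MIN + 1
print(available)  # 2768 node ports to hand out cluster-wide
```

Each allocated node port is taken from this single cluster-wide pool, which is why the same port answers on every node.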
A
That means we have a flat port range that is defined consistently on every node running kube-proxy, and when a node port has been granted to a service, regardless of which node you go to, you're going to be able to see the result of that service. That's actually one of the things I wanted to show. So I do kubectl get nodes -o wide, and then if I do curl for each of these hosts, right — regardless of which host it is.
A
Now
what
it
means
is
that
in
if,
if
I
were
to
hit
that
control
flame
node,
because
my
nginx
pod
would
never
land
there
and
beam
I
went
to
get
a
canoe,
it
means
that
I
would
get
a
consistent
source
address,
because
I'm
never
next
I'm,
never
terminating
on
a
node
where
that
note
is
actually
existent.
So
it's
interesting
stuff,
so
key
proxy
configures,
that
port
range
port
30,000
to
port
32767.
A
— consistently, on every node where kube-proxy runs, all the time. And by consistently, I mean that some nodes may take longer to actually implement the change, depending on the scope and scale of the services that we've defined, but each of the nodes will have that consistent view of the world. All right.
A
So
cluster
IP
node,
no
import-
it
shall
only
usually
only
used
only
used
within
the
scope
of
the
cluster
and
what's
interesting
is
because
we
were
just
looking
at
the
IP
table
stuff.
If
you
look
at
the
way
that
the
IP
tables
stuff
is
doing
its
magic,
it's
doing
its
magic
by
manipulating
the
packet
right.
It's
not
necessarily
IP
tables
is
not.
The
cluster
IP
address
is
never
something
that
is
expressed
as
a
radical
idea
dress
within
the
overlay
network.
A
It's
only
round
belen
that
there
is
actually
some
manipulation
that
will
happen
by
IP
tables
to
make
sure
that
the
destination
IP
address
is
merged
to
one
of
those
healthy
endpoints
described
by
the
service.
Let's
talk
about
that
real,
quick,
because
I
want
to
make
sure
that
we
understand
what
I'm
talking
about
there.
A
There's no ARP entry for that either, because it's actually only implemented inside of iptables. When the request comes in — when that TCP request comes in, right — we're gonna look at the source and destination IP address. If the destination IP address is that cluster IP, we're gonna manipulate that packet and change the destination IP address to one of the healthy backends, based on that randomizer that we saw earlier in the iptables output.
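That rewrite can be modeled as a tiny function — a sketch only, with made-up addresses (the cluster IP and pod IPs here are illustrative, not from a real cluster): if the destination matches a cluster IP, swap the destination for a randomly chosen healthy backend and leave the source alone.

```python
import random

# Hypothetical service table: (cluster IP, port) -> healthy endpoints.
SERVICES = {
    ("10.103.42.173", 80): [("10.244.1.2", 80), ("10.244.2.2", 80)],
}

def dnat(packet, choose=random.choice):
    """Mimic the KUBE-SERVICES DNAT step: rewrite only the destination."""
    key = (packet["dst_ip"], packet["dst_port"])
    if key in SERVICES:
        new_ip, new_port = choose(SERVICES[key])
        packet = dict(packet, dst_ip=new_ip, dst_port=new_port)
    return packet

pkt = {"src_ip": "10.244.1.5", "dst_ip": "10.103.42.173", "dst_port": 80}
out = dnat(pkt)
# out's destination is now one of the pod endpoints; the source is untouched.
```

Conntrack then remembers the chosen mapping, so reply packets get the reverse rewrite — which is why the pod never sees the cluster IP at all.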
A
Even though my request is coming in and my destination IP address is that cluster IP address — that 10.x address — before it leaves this node, the destination IP address will be rewritten to one of the resolved IP addresses, based on whether they're healthy or not. Good to know, and good to understand how kube-proxy actually does what it does. Alright, now —
A
We've talked about node port — we talked about how node port is effectively the same as cluster IP, or how the different services relate, right. We have a service type cluster IP; we have a service type node port. The difference between node port and cluster IP is that when we have a node port, we're grabbing one of those ports from 30000 to 32767 and we're expressing it as yet another possible entry point, all right. So in this output we can see that there are actually these two lines up here at the top, right.
A
Those probabilities do look weird, I agree, but it's the random thing that actually puts the skew on it, and yeah, it's totally bizarre, all right. But if I look at just the nginx one, I don't see that destination port — that's our difference, right. So if I pull that up and then pull this one up: the top one here is actually our node port implementation, and the bottom one here doesn't have a node port rule — wait, oh, my bad, hello!
A
There
we
go:
that's
what
I
want
so
there's
my
MP,
so
I
can
see
these
two
Newport
lines
here
which
are
actually
mapping
traffic
back
to
my
service
and
then
I
also
see
the
cube
service
defined
for
that
for
the
cluster
IP
associated
with
that
guy
and
then,
but
my
my
non
node
port
instance.
Right
only
has
the
cluster
IP
doesn't
have
the
nude
port
associated
with
it.
Now,
what
is
the
difference
between
node
port
and
load
balancer?
A
Does
anybody
know
anybody
how
many
ideas
difference
between
node
port
and
load,
balancer
service
type
load,
balancer
and
service
type
of
apart
before
we
proceed
with
that?
I
want
to
actually
point
out
something
that
I
thought
was
really
interesting.
When
I
first
started
playing
with
kubernetes
and
trying
to
understand
services,
load,
balancer,
isn't
a
built-in
thing,
so
by
default,
I
mean.
Is
it
like?
If
you
have
a
kubernetes
cluster,
you
may
not.
A
Okay, so I have my three services: kubectl get svc. All right — nginx, nginx-lb and nginx-np. I can see that I got a cluster IP each time, but I've got a different node port for the bottom two. So I got a node port when I described NodePort, and I also got a node port when I described LoadBalancer. Correct — mostly correct, yeah. "LB is a node port on steroids" — exactly, good answers, all right. So "load balancer is money" — absolutely correct.
A
All
those
things
are
true
well
answered.
Load
balancer
can
be
offloaded
to
data
plane,
I'm
hoping
that's
data,
plane,
yeah,
so
load,
balancers,
money,
load,
balancers,
note,
port
on
steroids
and
load
bouncer
for
cloud
providers
or
managed
infra
and
node
ports
for
local
development.
Machine
provided
ports,
mostly
true,
okay,
so
you're
always
going
to
get
a
node
port.
A
When
you
create
a
load,
balancer
you're,
actually
interacting
with
some
other
system,
whether
it
be
the
cloud
integration
provider
or
whether
it
be
like
something
like
metal
lb,
you're,
actually
going
to
interact
with
that
system
and
say
and
request
if
they
configure
a
load.
Balancer
on
your
behalf
and
point
it
back
to
all
of
the
nodes
within
your
cluster,
because
we
understand
how
that
traffic
path
works
right
so
because
we
have
a
node
port.
A
We're
gonna
have
that
load,
balancer
forward
traffic
back
down
to
our
nodes
on
that
specified,
node
port
and
in
our
look
and
our
queue
proxies
gonna
balance
traffic
across
those
paths.
Load
balancers
are
l4
mechanisms,
layer,
four
load
balancer
right,
and
that
means
that
we
don't
have
any
guarantee
that
the
path
back
to
that
service
is
going
to
have
to
try
and
retain
the
the
source.
Address
means
what
it
means
that
this
traffic
might
come
into.
A
One
of
the
nodes
where
that
service
is
and
terminate
on
that
pod
leaving
the
hook
the
source
IP
address
in
place,
or
it
might
come
into
that
node
and
traverse
to
another
node,
because
the
way
that
the
IP
tables
forwarding
works,
meaning
that
now
the
source
IP
address
is
that
first
node
before
terminating
on
that
service.
Now.
A
I
think
it's
time
for
us
to
talk
about
external
traffic
policy,
real,
quick,
because
I
think
it's
actually
pretty
interesting.
So
one
thing
I
do
want
to
talk
about,
though
it's
like,
because
in
my
cluster
I
don't
have
a
load,
balancer
implementation
in
place,
I'm,
never
gonna,
see
it
resolve
right
so
because
I've
actually
asked
for
a
load
balancer
to
be
created.
A
That's
never
ever
gonna
happen
because
there's
no
load
balancer
implementation
inside
of
my
kinda
cluster
right
now,
I'm
not
running
in
the
cloud
I'm
running
on
a
kind
cluster
brought
up
locally
on
my
machine
and
I
haven't
deployed
anything
like
middle
lb
or
any
of
those
things,
and
so
what
that
means
is
that
that
rate
that
that
that's
not
gonna,
it's
not
gonna
work,
I'm
not
going
to
be
able
to
actually
ever
resolve
that
to
a
load.
Balancer
IP.
A
If
you
see
this
in
your
environment,
it
may
be
that
it
means
that
you're
like
on
a
bare
metal
configuration
or
that
the
cloud
provider
integration
isn't
working
or
that
you
know
whatever
it
is.
It
was
satisfy
that
load.
Balancer
implementation
is
gonna,
isn't
it's
not
working?
How
do
we
get
real
IP
in
that
case?
Let's
talk
about
that
with
the
part
where
Bogdan
is
actually
going
to
talk
about
yep,
it
will
get
update.
A
A
— source type of local. So: understanding whether the traffic is actually coming from local or somewhere else. When I actually change that external traffic policy — there we go — when I change that external traffic policy, I change the way that the behavior works. So you remember earlier we did a curl against each node, right? So we have — let's...
A
All right, I have two on worker one; I have two on worker two here — their IP addresses, right; they're broken up by IP address — and because there are four of them, two on each node, my balancing is actually still gonna work out okay, because the traffic will be balanced across each of the two inside of that host. But let's talk about what's happening here: if I hit the control plane node, I'm not getting anything back.
A
What
do
you
think
I'm
load
balancer
would
do
if
it
didn't
get
a
reply
back
from
an
endpoint
like
this
right.
It
would
determine
that
that
look
that
that
node
was
not
in
the
running.
It
was
not
a
healthy
back-end
service
and
so
we'll
take
it
out
of
it.
We'll
take
it
out
of
the
the
routing
decision
right.
The.
A
— service represents a set of two pods on each of the two nodes, right. That means that they're still in the running; the load balancer out front is still gonna balance between those two nodes. But what's gonna happen when the traffic terminates on those nodes? Right now, what happens is that we're actually going to be balancing back and forth between only the two pods that are local to that node.
A
This
greatly
simplifies
traffic
pattern,
because
it
means
that
when
the
when
the
connection
goes,
the
TCP
connection
comes
in
your
terminating
on
a
host
and
your
source
IP
address
will
always
be
the
same,
because
we
know
what
that
path.
Look
like
we're.
Never
introducing
an
extra
hop
like
we
do
in
cluster
mode,
where,
if
the,
if
the
service
isn't
local
or
if
by
chance,
you
pick
the
random,
we
decided
to
route
your
traffic
over
to
another
node,
we're
gonna
change,
that
source
IP
address.
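The knob being demonstrated here is set on the Service itself. A minimal sketch of the manifest (the names are hypothetical, not taken from the demo cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-np            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: nginx
  # Only deliver to pods on the node that received the traffic,
  # preserving the client source IP (no extra hop, no SNAT).
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 80
```

The trade-off, as described above, is that a node with no local pod simply stops answering on the node port, and the external load balancer has to health-check it out of rotation.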
A
— of knowing that. So it's still going to load-balance, using its algorithm, across those healthy endpoints — the nodes — but the difference is in the availability. At the moment I have more availability on node 0 than I do on node 1, but the load balancer doesn't know that, so it's gonna just balance across the two of them evenly.
A
This
is
basically
just
handling
a
percentage
of
routing
back
and
forth
between
the
two
okay
for
cloud
providers
are
managed,
yeah,
ok,
so
the
master
is
not
tainted.
It
just
doesn't
have
a
pod.
If
I
put
an
engine
X
spot
on
the
master,
it
would
start
answering
and
the
master
is
tainted.
Sorry,
yes,
the
Masters
changed.
So
that's
why
it
doesn't
have
a
pod.
A
Interestingly enough, we need to know the hostname. Sorry — the reason that kube-proxy needs to know the hostname in its configuration is so that it can determine what local traffic is. If that is misconfigured — if kube-proxy doesn't understand what the hostname of the node itself is — it has no way of deducing what the local traffic is, and that means that it would not be able to handle things like external traffic policy local. But that's how it's determining it, right.
A
And
from
there
it
can
determine
what
the
source
IP
address
or
the
actual
IP
address
of
the
host
is,
and
then
it
could
actually
kind
of
determine
from
there
like
what
is
local
traffic,
what
is
not
local
traffic
and
how
to
manage
that
problem.
A
couple
other
things
about
key
proxy:
let's
do
SS
a
and
lut400,
and
you.
A
Kube-proxy — this is a great way of actually understanding what a particular service is exposing, or what it's actually listening on. Okay, let's see: we can see that kube-proxy is listening on all interfaces on 32018 and 31966 — these are our node ports that we've expressed, right, that makes sense. We can also see that it's bound to localhost on port 10249 and port 10256.
A
Yeah, so here are the things that are exposed by the metrics endpoint for kube-proxy. It talks about a lot of the, you know, standard Go stuff, and then it starts getting into things like the HTTP requests that it saw. It talks about network programming: how long did it take kube-proxy to configure the network, compared to other things. These are quantile buckets, right, so we can see that most of them were taken in about sixty-five seconds — seems kind of slow.
A
I'm gonna jump into one of these and look at configz, right. So kubectl get all — kubectl get --raw, actually — in kube-system, oh yeah, and then grep. So I'm actually really sad selfLink is going away — like, this might be going away soon, because they feel like it's kind of a tough implementation to support, but it's still here for now, so we're gonna keep using it. Good, I'll get —
A
All right, so there is a configz endpoint on kube-proxy, so we can actually hit that thing and see how it's configured. These are the configurations of kube-proxy for this particular instance, and we can actually interact with it without a cert, so that's cool. So if we go back to our kubectl get --raw...
A
Yes,
because
the
ports
not
actually
exposed
that
way,
so
we
can
actually
get
this
information,
which
is
pretty
handy
to
understand
like
how
it's
actually
configured
at
runtime.
We
can
see
how
it's
actually
configured
at
runtime
by
looking
at
the
pod
right,
so
we
can
actually
understand
what
configuration
they
actually
had
right.
So
cute
can
I'll
describe
pod
n,
coop
system,
coop
proxy.
A
Jf
then,
the
config
map
that
represents
key
proxy
so
cute
get
out
describe
config
map,
a
new
system,
whoo
proxy.
Here's,
the
configuration
of
cube
proxy
setting
the
metrics
behind
address
to
localhost,
and
all
of
that
is
actually
enabling
BSR
false
setting
the
network
name
is
for
swift
all
that
stuff.
These
are
all
flags
that
you
can
use
to
configure
it
like
what
the
resource
container
it's
going
to
be
put
in
with
the
port
ranges,
because
it's
not
specified
its
default
30,000
to
thirty
thirty-two
767,
because
mode
is
not.
A
Mode
by
default
is
IP
tables
and
then
how
it's
actually
going
to
use
and
how
it's
actually
going
to
authenticate
it's
using
a
service
account,
so
it's
cute
config
is
defined
using
the
token
file
service
account
took
and
that's
actually
how
it's
going
to
authenticate
to
the
API
server
so
that
I
can
do,
is
watch
and
understand
right
so
because
it's
gonna,
the
entire
configuration
of
all
the
key
proxies
is
shared
across
that
whole
set,
and
it's
expressed
here,
okay.
So
what
it's
find
address
is
client
connections.
A
It has a configz endpoint; that is how it's configured. We've talked about — let's go back here and make sure — we talked about how it's authenticating, we talked about iptables, we talked about the services and the theory of operation, we talked about configz and metrics. The next thing I want to talk about is IPVS, and then we're gonna wrap it up. So what time is it now? I'm sure that we're not, like, super late — over time? Oh, it's three. Okay.
A
This is kind of interesting from the kind-config perspective. If I look at the configuration of this file, this is a standard kind config, but I'm populating the kube-proxy configuration and setting mode to ipvs. So the only difference between the configuration of the other cluster and this cluster is that I've actually set the mode to IPVS for kube-proxy — that's all that was necessary to do, right. And so now I'll docker exec into my IPVS worker.
A
— that we have associated with things, or the node port: we're gonna see it actually balancing across things. So in this case there's a node port — 30895 is the node port we have for the nginx service — and it's actually going to round-robin against these three entries and forward back to that service, right. And if we look at the cluster IP — here's the cluster IP we have associated with one of our services — the same round-robin, right. You can see that we don't have any active connections and the weights are all equal.
A
All right, so again, the weird thing about IPVS mode is that the iptables rules look dramatically simpler, but they don't do nearly as much work as they do in iptables mode — there's way less stuff happening here. What's happening here right now is basically just the standard rules and configuration that Calico puts in place for things, but there's no —
A
Making
sure
the
forward
check
stuff
gets
picked
up,
but
there's
no,
but
there's
not
a
lot
of
the
same
stuff
that
we
saw
in
net.
You
can
see
that
in
the
net
table
like
some
of
these
things
are
defined,
but
not
a
lot
of
them
are
the
same
right
like
we
don't
see
as
much
in
mark
masks.
We
don't
see
as
much
in
node
port
defined
here,
because
that's
actually
going
to
be
handled
by
the
kernel
now,
because
we're
actually
configuring
IP
Hipps
directly
I.
A
— to sync, and then it's basically just watching for endpoints and, again, configuring these things. So for your purposes, at this time, they're equivalent, right: IPVS and iptables — the two modes are doing effectively the same work. iptables came first; it's still the default. I'm actually not sure when IPVS may become the default, but I know that for your particular purposes right now they're equivalent, and if you wanted to actually explore using IPVS instead of iptables, that is available to you today.
A
All of these things together mean that we're actually doing quite a lot of churn to that iptables configuration, and it causes things to behave in ways that are maybe not the best, right, because it was never really meant or designed for that. nftables is a step in that direction as well, right — where we're like, you know, we know we've got to get better at actually having a dynamically configurable filter system. So nftables represents —
A
You
know
a
course
in
that
race
right,
so
we
have
IP
tables,
old
and
kind
of,
and
it
has
lots
of
interesting
sharp
edges
when
there's
a
lot
of
churn
and
if
tables
solves
that
you're
in
problem,
but
also
introduces
some
interesting
problems
about
the
migration
from
old
to
new.
Then
we
have
I
PBS.
If
we're
gonna
do
load
balancing,
why
do
load
balancing
with
IP
tables?
Maybe
we
should
use
IP
virtual
server
for
that
another
horse
of
the
race?
Then
we
have
things
like
cilium
or
everywhere.
A
They're,
like
you
know
what
like,
we
could
just
do
all
this
in
HD,
p
and
BPF.
We
don't
need
to
be
doing
this
and
you
know
some
of
the
older
technologies
like
NFTE
those
sorts
of
things,
although
it's
interesting,
if
you
look
at
and
if
you
closely
it's
like
effectively
still
implemented
and
I'm
BPF,
so
all
of
these
things
are
converging
on
ways
to
make
all
of
the
problems
that
ku
proxy
represents
better
and
generally.
A
"For about 10,000" — I don't know what number it is at which iptables would hit that, but that's true: as you grow the number of services — obviously, because of the points I've already described, right — you have more churn, which makes iptables fall over faster, because it's not atomic; and as you grow the number of services, the time it takes to actually handle the reconfiguration of iptables takes longer, again because it's not atomic. But would you be able to move to IPVS and get away from that problem in iptables? Yes.
A
Topology — there we go, that's what I'm looking for. There is an enhancement proposal in place to try and solve this problem: service topology, where you can actually handle things like topology-aware routing for services. I think that's what you're looking for. I'll go ahead and put it in the notes so we don't lose it.
A
I'm gonna sign off. All right — thank you for digging into this with me. There are so many interesting things that we're gonna be playing with in future versions of this. So far we've done kubelet and we've talked about kube-proxy; in the next one we're probably going to talk about controller manager, we're gonna talk about the kube-scheduler, and we're gonna dig into some of the details about how all of those things work and some of the concerns around how all that stuff happens.
A
But
the
big
takeaway
for
Q
proxy,
again
I
feel
like
what's
really
interesting
to
understand
is
that
Q
proxy
does
a
lot
of
the
heavy
lifting
for
services
and
that
it
is
actually
a
set
of
instance
in
pods
that
run
on
your
nose.
So
each
node
has
its
own
consistent
and
unique
view
of
the
cluster
servicemen
surface.
So
thank
you
again
and
you
all
have
a
great
killer
weekend
and
I'll
see
you
next
time.
So.