From YouTube: TGI Kubernetes 117: Grokking Kubernetes: etcd
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
There we go — hey, welcome to etcd. Or rather, welcome to episode 117 of TGIK. This episode is going to be us digging into etcd: how it works, what it does, and some of the topologies for how it's spun up inside of Kubernetes clusters. We're going to look at all kinds of interesting models here, and I'm going to share with you a lab environment that I built that you can run locally on your machine to do some of this modeling yourself.
So, let's see who we've got signed in here. Oh wow — let's go back a little bit. All right, so we've got Mr. Steve Wade hanging out with us from London — how are you doing, Mr. Steve, good to see you. Philip Nelson saying hello. Joy from Richmond, Virginia — good to see you, Joy. Lou Maddy saying everybody rocks — I'm so glad to see you, Lou, Eddie, and glad to see everybody else.
Thanks for calling this out, because it's definitely one of the things I wanted to dig into and I just hadn't actually scheduled an episode for it — so good point calling this one out. We've got Sunny Kumar saying hello. Rory — I can never say that name, but it's always good to see you, Rory. SIG-Honk represent — which is kind of a group of security-minded individuals. We've got Cherry saying hello, Z saying hello, and Jeremy asking what the topic is. Yeah.
If you have questions you'd like to ask, or things you want to make sure that we cover, or use cases and stuff that you're concerned about, just make sure you throw them in here, and then I'm happy to dig into them and talk about how they work. But this is usually just me brain-dumping a particular component of Kubernetes as it relates. And we are outside — yeah, I'm hanging out in my backyard. Just a beautiful backyard; I love it, it's a tremendous place to be.
Who else do we have? We've got Brendan saying hello from Seattle. We've got some hellos from Sweden — good to see you. We've got some hellos from Denmark and from Boston. We have somebody signing in from India — good to see you — and Ally from Sweden, Anshu from Paris, Bogdan saying hello from Bucharest, Romania, someone from Israel, Mr. Steve, Lucas, and hello, Mr. Tim. Also Cornelia Davis saying hello from Santa Barbara, and Tessa from Tehran.

A fully geared bear from Portugal, and Armen from Amsterdam, Marcin looking forward to the big day at the deep dive, and David from Colorado — I think there might actually be a Midberry in Colorado; I don't know where my brain was going with that. And Toon is saying hello — good to see you today — and Phillip and Tiffany and Alessio and Phil and AJ and Shubham, Z from San Francisco, and Boas from Hungary.
It's good to see you all. I just love how this is such a global audience, and it's just a great time to sit and geek out about some stuff, so I'm glad you're all here to hang out with me. That said, let's dig into what's happening in the ecosystem this week. Remember, I've pointed you in this direction before, but I'm going to go ahead and kick over to the screen share.
All right, cool, and then we can click over to there. Yeah, all right. So I've pointed you in this direction before, but here's your reminder that there are a couple of really great places to get information about what's happening in the ecosystem. lwkd.info is a great place if you're interested in what's happening within the code base for Kubernetes, and as an example, for the week of May 3rd we've got a next deadline.
The enhancement freeze is happening on May 19th, and then there are some featured PRs, which are always pretty interesting. The stuff that shows up here is definitely worth tracking if you're interested in what's happening inside the code base of Kubernetes, so this is a great resource if you're looking to understand a little more about what's happening there. The other one this week that's useful is — sorry.
Okay, it's kubeweekly — kubeweekly.io — and this one is curated by some of the awesome spokespersons for Kubernetes. I couldn't think of the term for a second — you know, the folks who are out there talking about it — CNCF ambassadors for Kubernetes, that's what I was looking for. They curate this list every week, and it's actually pretty decent every week. So, this week on the podcast from Google:
They're talking to Matt Butcher about Helm, which ought to be a pretty interesting conversation. A lot of us are carrying baggage around Helm from over time, but really, you've got to stop and take a good cold look at where it is today. I mean, Helm 3 is a big change, and it's definitely worth considering again. Then we've got the US Department of Defense enabling DevSecOps on F-16s and battleships — sounds like an action-packed article.
But it's not removed from the API server yet, right? So there's a period of compatibility in which we're still able to use what's there, until the version that will remove it. So apiextensions v1beta1, apiregistration, and authentication — these APIs are all moving, and as of 1.19 they will have changed.
There's a meetup talk on a journey toward throwaway clusters — I love that topic, that's pretty awesome, and it's on YouTube, so let me put the link into the notes, y'all. And Steve Wade has updated his version of the deprecation tooling — that's good. I wonder, actually, about that other one — what is it — Pluto. I wonder if Pluto has updated? Let's take a look real quick. FairwindsOps' Pluto is a great little open-source tool — they actually just announced it pretty recently. It looks like we're getting updates.
What else are we going to cover? There are also some pretty interesting articles. One of my favorite topics is kind — y'all have heard me talk about this all the time; it probably gets a little irritating to hear me talk kind up, but we're going to be playing with it again today, and it is what it is. This one is interesting because it's about running kind inside a Kubernetes cluster for continuous integration. So if the resource you have at your disposal is a Kubernetes cluster, how can you handle continuous integration?
Where kind is the destination for your tests — and this one was really well written, by Gu and Steve Young, basically documenting some of the challenges they ran into in nesting kind inside of Kubernetes at D2iQ. So it's definitely worth checking out. They call out some really interesting problems, things like making sure that you have the MTU set correctly, and the pid 1 problem. If you're not familiar with containerization, this is a problem that has been haunting it for years.
Basically, we need to make sure that we have a way of managing pid 1. The TL;DR version of the pid 1 problem is this: if you're running everything inside of a container and you send a kill or a SIGHUP to the first process in that container, how do you ensure that all of the child processes are also reaped? And that's where one of the ways to solve that problem comes in.
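One common fix is to run a tiny init as pid 1 that forwards signals and reaps children — that's what tini, and docker's `--init` flag, do. A quick way to see the underlying zombie problem on an ordinary Linux box (no containers needed) is a sketch like this:

```shell
# Why pid-1 reaping matters: a parent that never calls wait() leaves its
# exited children as zombies. The inner bash backgrounds a child, then
# execs into `sleep`, which never reaps -- so the child lingers, defunct.
bash -c 'sleep 0.1 & exec sleep 1' &
parent=$!
sleep 0.5                              # give the child time to exit
ps -o stat= --ppid "$parent"           # shows "Z" (zombie) on Linux
wait "$parent"                         # cleanup: let the parent finish
```

Inside a container, pid 1 plays the role of that non-reaping parent unless something (an init, or the application itself) calls wait() for orphaned children.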
It's interesting — they didn't run into the file system problem that I run into, you know? Yeah, lots of really interesting details, lots of interesting things to think about when you're trying to deal with running kind inside of another kind, inside of a Kubernetes pod. You could go several layers deep — I think you'd probably run out of inotify watches before anything else happens, but you get the idea: kind-in-kind stuff. All right, what else do we have?
Checking in with the chat — Carlos Santana, yeah, I saw that today. That's a good one! Yeah, kind is awesome. "Why?", Alex keeps saying — because you can, Alex, and because it's fun to point the mirrors at each other and see what shakes out. Really, though, it's for CI. Say you're developing an operator: your build-and-test environment is where you're going to want to do all of your testing.
So in your CI environment you might spin up a kind cluster, then deploy that operator, then run your integration tests against that operator to make sure it's going to work — but you don't want to do that against a production cluster. So how do you handle that sort of thing? This is one of the use cases for it.
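As a rough sketch of that pattern, here's what a CI job using a throwaway kind cluster as the test target might look like — GitHub Actions syntax, and every name, image tag, and path here is illustrative, not from the article:

```yaml
# Hypothetical CI job: spin up kind, deploy the operator, test, tear down.
name: integration
on: [pull_request]
jobs:
  kind-integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create a throwaway kind cluster
        run: kind create cluster --wait 5m
      - name: Build the operator image and load it into kind
        run: |
          docker build -t example/operator:ci .
          kind load docker-image example/operator:ci
      - name: Deploy the operator and run integration tests
        run: |
          kubectl apply -f deploy/
          go test ./test/integration/...
      - name: Tear down
        if: always()
        run: kind delete cluster
```

The point is the shape of the flow: the cluster exists only for the lifetime of the job, so the tests never touch anything shared.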
They talk about why these are a good idea and how to do them, and they give you some examples. Wow, they really go into it: required dropped capabilities — that's actually a really good set of dropped capabilities — turning off net broadcast, all kinds of things, and some of these are actually turned off by default, weirdly enough. But yeah, you'd want to turn that off. And then there are the volumes that you're going to allow access to, right?
So the allowed volume types: configMap, downwardAPI, emptyDir, projected, secret — and, there we go, persistentVolumeClaim, if you want to enable persistent volume claims. This is a very restrictive policy that we're looking at here: hostPath, hostNetwork, the runAsUser rule — although I'm actually kind of surprised by what I see there.
I see the rule MustRunAsNonRoot — so, disallow root. Basically you're enforcing that the container can't run as root, allowing any UID other than zero to match — that's good. hostIPC: false — yeah, this is a very good restrictive policy; I can tell they put some work into that one. And then there's how to apply the policies — basically talking about the way that pod security policies are consumed.
Okay, well, every time I click on this it highlights everything and makes it go away, so it's hard to read — but anyway, this is actually kind of a way of providing an exception, right? In this case they're allowing fluentd to do things that the general workloads inside the cluster maybe can't do.
And the neat thing about this is that they're associating a RoleBinding, bound in the fluentd namespace, tied to the service account that fluentd will use. That relationship with the service account is how the fluentd pods — as long as they're running with that particular service account — get access to use the pod security policy that they've defined for fluentd.
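The wiring described above usually takes three objects: the PSP itself, a ClusterRole granting the `use` verb on it, and a RoleBinding in the workload's namespace. A minimal sketch — the policy fields and all names here are illustrative, not the ones from the article:

```yaml
# A PSP scoped to one workload, plus the RBAC that lets only
# fluentd's service account "use" it.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: fluentd
spec:
  privileged: false
  allowedHostPaths:
    - pathPrefix: /var/log          # fluentd's exception: read node logs
  runAsUser: { rule: RunAsAny }
  seLinux: { rule: RunAsAny }
  supplementalGroups: { rule: RunAsAny }
  fsGroup: { rule: RunAsAny }
  volumes: ["hostPath", "configMap", "secret", "emptyDir"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-fluentd
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["fluentd"]      # grants "use" on this one PSP only
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-fluentd
  namespace: fluentd                # binding lives in the workload's namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-fluentd
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: fluentd
```

Because the subject is a single service account, only pods running as that account are admitted under this policy — the most granular option.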
The access model for pod security policies is a little complex, but this is one way to solve it — probably the most granular way. Another way would be to bind, within the namespace, the role binding to the group of service accounts, or any system service accounts, for the particular pod security policy; that way you don't actually have to tie it to the specific service accounts in the namespace. You can also tie it to the service accounts associated with things like the controller manager, right?
So where you have, say, the ReplicaSet controller, you can bind it to that, and then when the ReplicaSet controller creates pods, it will have access to the pod security policies that are bound within that namespace. Interesting stuff. Then they have troubleshooting and triage — this is a really good write-up; great job, Jason Price. If you're interested in pod security policies, this is a really solid write-up.
But if you are interested in pod security policies, you should probably also be interested in things like OPA and those sorts of tools, because it's likely that pod security policies will eventually get moved out of the in-tree configuration and into third-party things like OPA Gatekeeper. "OpenShift has SCC — wonder if OPA will be the thing to bring in?" Yeah, I think that's the goal: to unify on stuff like OPA Gatekeeper.
The challenge, of course, is that in some of the more restrictive environments, that means you have yet another third-party tool that you have to require be deployed on all clusters, and that's a hard one. Agreed, Ryan — that's a good question, and it's one that we're still battling with, but I don't know that there's a good clean answer there. If you're interested in this topic, though, I would definitely recommend jumping into the sig-auth community meetings, bringing it up, and driving it.
I know that we're looking for people to help drive the direction of where this is actually going to go. So, very good — yeah, that's a good one too. It's interesting that you bring up kube-system, because that actually highlights another little quirk in the way that pod security policies work. And here is a trivia question for my amazing audience — are you ready? The question is: say you're going to bring up a cluster with kubeadm, and kubeadm brings up the control plane with static pods.
Now say you wanted to grant access to the pod security policies that allow those mirror pods to be registered with the cluster — so that when the kubelet tries to register the pod, it is able to, because the pod security policy allows it. What would you do? How do you grant access such that that can happen?
Anybody have any ideas? It's kind of a fun one. Think about who is defining the pod in that case, because it's not the user who is defining the pod — and when that pod is defined, how would you grant that entity access to that pod security policy? The kubelet — yeah, it's actually the kubelet. The kubelet's authentication identity is actually part of a group called system:nodes. It is the kubelet, and so basically it's the node itself: the node authorizes with its own key, and so the system:nodes group comes in.
The group system:nodes is actually where you would grant that access. You would grant access to that pod security policy to system:nodes in the kube-system namespace, and then, when the kubelet tried to register that mirror pod with the API server, it would authenticate as part of the system:nodes group and it would be able to make that happen.
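Concretely, that grant is just a RoleBinding in kube-system whose subject is the group rather than a service account. A sketch, assuming a ClusterRole (here called `psp-privileged`, an illustrative name) that grants `use` on the relevant PSP:

```yaml
# Let kubelets (group system:nodes) use a PSP for the mirror pods
# they register on behalf of kubeadm's static control-plane pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-mirror-pods
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged          # assumed: grants "use" on the PSP
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:nodes
```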
There's another article — oh, I caught the typo — "Helm + Kustomize: better together." I agree with this, but I think it's sometimes hard, because you have to think generally about templating applications. This is actually the basis of a couple of the last episodes that Bryan Liles was doing, around how to manage things like: how do you deploy an app? What are your concerns?
So in this case they're talking about how you would leverage Helm — Helm 3 — and how you can effectively do templating, where you end up with a whole bunch of YAML that has been templated from your Helm chart. Then, once you have that YAML, you can use Kustomize to modify or change the values on a per-cluster basis. And what's interesting is that this has actually been implemented in a couple of different places — it's implemented as part of Flux.
It's also implemented as part of Argo CD: the idea that you could have GitOps-based configuration of your clusters, in which you have some operator running inside your cluster — whether that's Argo CD or Flux — and it would pull the resources, maybe a Helm chart, down from your git repository, then manipulate that to make it cluster-specific inside the cluster itself, and then apply those resources directly against that cluster.
All right, so Kustomize is a part of that recipe here, but I think in this case it's interesting because they're talking about how you would leverage this thing inside of, say, your CI push flow rather than a pull flow — sort of flipping the model around a little bit.
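In that push-style flow, the chart output becomes a Kustomize base and the per-cluster tweaks live in overlays. A minimal sketch of the layout — the file names, chart name, and patch are all hypothetical:

```yaml
# overlays/prod/kustomization.yaml
#
# base/rendered.yaml is produced earlier in CI by something like:
#   helm template myapp ./chart > base/rendered.yaml
# and the overlay is pushed with:
#   kustomize build overlays/prod | kubectl apply -f -
resources:
  - ../../base/rendered.yaml   # the raw Helm chart output
patchesStrategicMerge:
  - replicas.yaml              # per-cluster tweak layered on top
```

The design choice is that Helm owns the templating and Kustomize owns the environment-specific deltas, so neither tool has to do the other's job.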
Pretty cool stuff, though — definitely worth checking out. How are we doing with the audience? "Red Hat has moved away from Ansible in their Open Data Hub on OpenShift, to Kustomize." Oh, that's interesting — cool, I didn't know that; thank you for sharing that, Willie. "Helm supports a new post-render flag" — see, this is why I like doing this, because I learn stuff every week. And Mr. Steve Wade will be talking about Kustomize next week on a webinar with Weaveworks, and he's going to put the link to that webinar into our notes.
You go to vmware-tanzu and you go to tgik — this is the directory where we keep TGIK. As a reminder, if you have episode ideas, feel free to throw them into the issues here, and we try to pick episodes from that.
I've actually just been kind of going in my own direction with the TGIK stuff lately, but whenever we're trying to figure out what we're going to do next, it's great to have this list of episode ideas that we can come back to and look into. Some of these we've covered and just haven't closed, but there are a few others here that are definitely worth digging into. We also have an episodes directory in which you can find content related to things that we've done in the past, right?
Take episode three — this is the one that Josh Rosso put out, and he put in his examples: the diagrams that he made, and also the vault examples. So what I'm going to do at the end of this episode is upload a 117 episode directory, where all of the resources that I walk you through in this episode will be available to you — so if you're interested in it afterward, you can reproduce what I do inside this episode.
So it's actually super interesting stuff, and we're going to leverage it today in this episode. With Footloose I can create groups of nodes that are brought up on some base image — in most of my examples I'm going to be leveraging Ubuntu 18.04 — and now that we have containers that are running an actual Ubuntu operating system, we can put stuff onto those things, right? So in my lab I've got that set up. Let's take a look at the Footloose YAML — I've got that set up here.
You can see I've created a couple of different machine sets — kind of a term similar to the way we think about Cluster API — and I have three replicas of this particular spec. I'm telling it to use a docker backend. Why is there a backend field, you might ask? Because you could also use Firecracker — how cool is that? So in this configuration I'm using docker as a backend, but you could also use the Firecracker backend and create VMs instead of containers. Yes, we do still want that, Steve!
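For orientation, a Footloose config along these lines looks roughly like the sketch below. I'm writing the field names from memory of the Footloose config format, and the names, counts, and paths are from this lab's setup rather than anything canonical — treat it as a sketch:

```yaml
# footloose.yaml -- approximate shape of the lab config
cluster:
  name: etcd-lab
  privateKey: cluster-key          # SSH key footloose generates/uses
machines:
  - count: 3                       # three replicas of this spec
    spec:
      name: etcd-member-%d         # templated member number
      image: quay.io/footloose/ubuntu18.04
      backend: docker              # "ignite" gives Firecracker VMs instead
      networks: ["kind"]           # join kind's docker network
      volumes:
        - type: bind
          source: ./shared         # etcdadm, certs, cached etcd bits
          destination: /shared
```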
Yes, we do, and you can create virtual machines using KVM or Firecracker to actually handle this stuff — pretty cool. Here's the image that I'm going to use, and you can see it's actually pretty flexible. With the most recent version of kind, kind now puts all of its nodes inside of a "kind" network, so it's not using the default docker bridge anymore.
It's created a new bridge called kind, and it's putting all of the nodes in there. Because I'm actually going to stand up an etcd cluster and use that etcd cluster as a backing for kind, I needed to make sure they were able to reach each other, and due to the isolation that docker does automatically between networks, I just decided to put these nodes into that kind network. That way they can all just intercommunicate, and things like short DNS names and all that stuff will work.
Good to see you, sir. We also have the ability to mount in volumes, right? So in these I'm actually creating an etcd cache — this is where we host the etcd bits — and then we also create a shared directory that I'm mounting in, which is where I'm mounting in things like etcdadm, a tool we'll talk about shortly, and some of those other things. Cornelia has questions.
One of the insanely cool things is when you are using actual VMs — yeah, exactly, true. I really dig it; it's a really cool tool. So I've created three different replica sets, if you will. I've created one for etcd, and you can see in the name I'm actually templating the member number, so I'll have three members of an etcd cluster — member 0, member 1, member 2 — and then I've also created a load balancer.
That's where the load balancer name comes from, but I've only got one of those, and it'll be lb0. Then I've also created a proxy, and we're going to talk a little bit more about what that means here in a minute — that'll be fun, we're going to get to it in a bit. You can see it has the same mounts as all the other etcd stuff. Footloose's source is at weaveworks/footloose on GitHub.
First it created my SSH key — my cluster key — and then it made sure that the docker image I need is actually pulled (it already is), and then it basically created the members. So now I have my etcd member 0, etcd member 1, etcd member 2, and then I have the etcd lb0 and the etcd proxy. And I like that they also call out in their logging that it's connected to the kind network — how cool is that?
So then, if I do `docker ps`, I can see all those nodes. And the way that footloose works, with the cluster key I can do SSH — sorry, `footloose ssh root@member0` — and that will get me right into that node. It'll look and feel just like a regular node: systemctl status works, there's really nothing running on this box so far, and journalctl —
Oh right, so we've got journalctl. All the things actually work the way that you would expect. Obviously some pieces have been stripped out, but I can do `apt update`; I have all of that kind of neat stuff available. And in the source of Footloose they also describe how to build images.
Let's take a look at that — sorry about that. So in my inventory I've actually got my members: etcd member 0, 1, and 2. I'm using `ansible_connection=docker`, which tells Ansible to `docker exec` into these things for any manipulation of the actual node, and I've created an "all" group.
I've created an lb node, I've created an lb group, and I've created some etcd members. Then, just kind of horsing around over the week, I played a little bit with building a playbook for this stuff — and again, this is all going to be uploaded. I put a common role in here and I put an HAProxy role in here, because we're going to play with the idea of the etcd cluster being behind a load balancer.
And in this etcd role I'm able to get away with a lot, because I'm actually assuming the configuration of the mounts and stuff. So in this case I don't have to copy in the CA tarball or the CA certs — I can assume that they're already part of the shared directory that I've built here, and these are all going to be uploaded. Actually, before we get into this, we should talk about etcdadm. So let's do that.
sigs.k8s.io — etcdadm. If you go to sigs.k8s.io/etcdadm, you can find a project that we're working on upstream. It's not getting a lot of love right now, but it's up there — some of the more recent commits were from three or four months ago, and there was a thing I had to fix; we'll see if my fix worked.
What etcdadm tries to do is basically provide a binary that wraps the ability to manage the membership of all of the members of etcd. And this isn't the only thing out there that does it — there's also another one by a friend, Quentin Machu: the etcd-cloud-operator.
That's another really interesting project out there — Quentin Machu's etcd-cloud-operator. But the goal of etcdadm is basically to make it so that, from the individual etcd node, you have some tool that abstracts away some of the complexity of configuring etcd: you could bring up a single etcd member, and then from another node deploy all the things necessary to deploy etcd again there, and just point it at that
first node — and all of the join commands and all of that other stuff will just be worked out for you. So it greatly simplifies the management side. And the etcd-cloud-operator piece is actually kind of interesting: it builds context for what the nodes are by leveraging an auto-scaling group in AWS. In my case, I'm not making that assumption.
So that brings us back to this. This is where I'm basically running the etcdadm command, doing exactly what we were talking about before. I have a load balancer configured — that's where etcd-lb0 comes in — and I'm leveraging etcdadm to init etcd on member 0. For the join command, I'm adding the etcd-lb0 hostname as an extra SAN, and that's because by default etcdadm will only encode into the server certificate the hostname that etcdadm is running on. But because I'm going to put this behind a load
balancer, I also needed to include the hostname for that load balancer — or the certificates won't match and things will fail. We're going to explore that stuff live as we get into it. Then on the other members I'm doing the same thing, but I'm joining to the load balancer and again encoding in the extra SAN, etcd-lb0. So: pop quiz.
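Roughly, the commands the playbook is wrapping look like the sketch below. The hostnames are from this lab, and I'm writing the `--server-cert-extra-sans` flag from memory of etcdadm's CLI — treat the exact spelling as an assumption and check `etcdadm --help`:

```shell
# On the first member: bootstrap etcd, adding the LB hostname as an
# extra SAN so the server certificate is valid when dialed via the LB.
etcdadm init --server-cert-extra-sans etcd-lb0

# On each additional member: join via the load balancer's endpoint,
# again encoding the LB hostname into this member's server cert.
etcdadm join https://etcd-lb0:2379 --server-cert-extra-sans etcd-lb0
```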
This is actually kind of an important question, because it makes you think about where I'm terminating TLS. It's a no for the proxy — yes, exactly: I'm just using HAProxy to handle the transport piece. I'm not terminating TLS on HAProxy; I'm terminating TLS back on the nodes. And because of that, I need to make sure that the backend node has that hostname as part of its server certificate, or TLS won't terminate on that node. So if you think about that as a diagram: basically, my client will go to the load balancer.
So now I've got all that configured, and it's kind of neat because it's Ansible — and I think it's a requirement that if you're showing off Ansible, you show off that it's idempotent. I've made it so that it is idempotent: all of these things will come back green because nothing had to change. Ooh — and that actually highlights one of the other gotchas that I had to fix.
This is kind of a fun one. You can see that nothing else had to change, which is actually kind of neat output — so it's an example, if you're interested in that kind of thing; definitely check it out. One of the other things I had to fix, which was kind of interesting, was the name resolution stuff.
By default, your system resolver will try to resolve an IPv6 name before trying to resolve an IPv4 name. If I do `host google.com`, it'll try the IPv6 name before the IPv4 name, and that broke some things inside of my etcd cluster. In fact, it took me a little while to figure this out: if I jump into my lb0 here and do a `host` lookup on member 0...
Otherwise, what happened? Well, otherwise it was actually failing to terminate, because the name was resolving to IPv6 and etcd by default was binding only to IPv4 — kind of interesting stuff. "Is there a particular need for IPv6?" No, there's not a particular need; it was just kind of tripping things up. How did I change that? Good question — thank you for getting me back on topic.
Basically, by tricking the system resolver — in glibc and everywhere else — into resolving IPv4 over IPv6. By default, if you have a dual-stack node, it will try to resolve IPv6 over IPv4, and in some cases — especially cases like this — that can break stuff. Maybe your upstream DNS for IPv6 is broken, or maybe you don't actually have a path to IPv6 addresses, so you can resolve those IPv6 hosts but you can't access them. This is a system-level way of fixing the problem.
On a host that actually has man pages, you can dig into what this means: basically it's configuring the getaddrinfo function, and it's configuring it system-wide. You can set precedence, and you can dig into it further — pretty cool stuff. But yeah, that's how I did it. That was your deep-magic tip for the week.
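For reference, the knob in question is glibc's `/etc/gai.conf`, which tunes the getaddrinfo() address-sorting rules (the RFC 3484/6724 precedence table). Raising the precedence of the IPv4-mapped prefix above the default makes IPv4 answers sort first:

```
# /etc/gai.conf -- prefer IPv4 results from getaddrinfo() system-wide.
# ::ffff:0:0/96 is the IPv4-mapped prefix; giving it precedence 100
# ranks A records ahead of AAAA in the sorted result list.
precedence ::ffff:0:0/96  100
```

See `man gai.conf` for the full label/precedence table this overrides.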
All right, fine — yes, you could, but I like the gai.conf approach better, because I don't want to disable IPv6 entirely; that seems a little harsh. I just don't want it preferring the IPv6 addresses — I'm just kidding, anyway. So let's play with it some more. Now we have our etcd cluster up, and we can jump into one of our nodes — `docker exec -it etcd-member-0 bash` — and then there's the etcdadm binary shared into the node.
You have these different commands, right? You have a join command, which we saw in our Ansible scripts, and we have our init command — and we're going to play with these. In fact, let's play with one now. You have join, init, and reset, and these are some of the ones we're going to be playing with. In our case, we first want to see whether etcd is operating.
So if we run the etcdctl wrapper — `/opt/bin/etcdctl.sh member list` — we can see all three of our members, registered with their IPv4 addresses. This was the problem, right? Because these were registering with both IPv4 and IPv6, I needed to make sure the load balancer was actually accessing them on IPv4. That's not —
That was just how I chose to do it. But the cluster's up, so the next thing I want to do is remove member 0 from the set, show how that works, and talk about why etcdadm is so cool — and then we'll get into the fun stuff of actually bringing up an etcd cluster and talking about how all of that works.
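That removal flow, sketched as commands — assuming this lab's `etcdctl.sh` wrapper and the footloose node names from earlier; treat the exact paths as specific to this setup:

```shell
# On the member being removed:
docker exec -it etcd-member-0 bash
/opt/bin/etcdctl.sh member list   # three members, note member 0's ID
etcdadm reset                     # deregisters this member from the
                                  # cluster, then tears down etcd locally

# From any surviving member, confirm the membership shrank:
docker exec -it etcd-member-1 /opt/bin/etcdctl.sh member list
```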
So: exec into the member with bash. What I'm going to do is remove this member, and we'll talk about the magic of etcdadm, which I really dig. If I do `etcdadm reset`, it sees the etcd service — in this case actually a systemd service, since part of etcdadm's magic is to pull down the etcd binaries, configure a systemd service, start that service for you, and configure it however it needs to be configured.
This was the leader in the previous configuration, but we just removed that leader. So now, on one of the other nodes, if I run `/opt/bin/etcdctl.sh member list`, I can see there are only two members. That's because part of the etcdadm magic that wraps this is removing the member: when you do a reset, it makes a call to the etcd cluster before doing anything to that etcd node itself.
It's putting all of its data into /var/lib/etcd, I bet. If I do a find on /var/lib/etcd, I can see all of the existing data for this particular member there — but if I were to do that up on the node we just reset, that directory doesn't exist. So that's what reset does. It's kind of a trip — systemd instead of docker — but you know, it's good for lab environments and stuff like this.
A
Yeah, so if you go to etcd.io you can find the website for it, and their docs are pretty good. They talk about what etcd is, what it's used for, and some of the adopters — and as you can see, Kubernetes is right at the top. etcd is a pretty big, important part of Kubernetes, and it's being used as sort of the backend key-value state store, so that pretty much everything else inside of Kubernetes operates in a way that is stateless.
A
Obviously that means etcd holds a lot of really important information — like the ability to manage the lifecycle of your cluster over time. And because that's so intrinsically important to the lifecycle of your cluster, it's probably a really good idea to understand how to manage etcd and how etcd is managed in your cluster. That's the idea of this episode: talk about how that works in general. Now, typically when I talk about etcd, I talk about kubeadm and how kubeadm does this stuff.
A
So in this configuration I'm going to bring up just a stock kubeadm multi-master cluster. I think I have a single worker node, maybe, but the important part is that it's got three control plane nodes. So it's an HA configuration of kubeadm, and this is a pretty typical kubeadm-based deployment.
A
Let me make this bigger. So the way kubeadm handles this stuff is that it will manage what we generally refer to as a stacked etcd cluster, in that every control plane node will have a single member of the etcd cluster, and kubeadm will take care of the instantiation of that etcd cluster in such a way that, as it joins new members to the Kubernetes cluster, it will also join new etcd members to the existing etcd cluster.
A
So as a new control plane node comes up, kubeadm actually handles the calls necessary: from the API server it pulls down some of the configuration — the shared certs and those sorts of things for these new control plane nodes — and then makes the call to etcd to join the new control plane node's member to the existing cluster.
A
We're going to look at what that looks like inside of kubeadm here in a sec, but basically, as we add more control plane nodes, kubeadm is handling the manipulation and making sure that we're forming an HA-compatible etcd cluster. But there's a cost, and we're going to talk about that cost here as well.
A
With kubectl get nodes we can see all of our nodes are there — things are still booting up, but I have two workers and three control plane nodes. And if I docker exec into one of my control plane nodes and do crictl ps, I can see that the way kubeadm handles this is that it runs etcd inside of a pod. We can find that pod, if you're curious about looking into how this works, inside the Kubernetes manifests directory — etcd.yaml.
A
This is the entire pod-based configuration for etcd, and kubeadm will generate the certs and handle the cert rotation for this stuff. It'll configure it in such a way that this particular etcd member comes up with a health check for itself — if etcd goes wonky, it will actually try to restart etcd — but it starts it up as a static pod manifest. We can also see a host path, so we're mounting in the certificates that are going to be used to secure this particular instance of etcd.
A
This is the place where you can go look for that snapshot — member/snap — and as long as you have a reasonably recent snapshot of etcd, you can get that cluster back. So that's interesting information about how that works. Now, I'm not saying it's easy, but it can be done, and we might explore it in the time that we have today — we'll see how it goes. We're probably going to go a little long today.
A
This manifest basically mounts all the same stuff that kubeadm mounts, and it makes a bunch of assumptions about how things work. It's just going to grab a k8s etcd image — in this case it's pulling 3.3.10; I think the cluster itself is already operating at 3.4.3, but close enough. Then we set some environment variables. This is actually a way to configure the etcd client: ETCDCTL is the prefix, and then any of the arguments that you would pass to etcdctl.
A
You can actually pass them as environment variables — this is true for the etcd server as well, and we have an example of that over on the other node. Then we mount in the CA certificate, and we mount in the health-check certificate that we're going to use effectively as a client certificate. And then one of the interesting things is that we're going to run etcdctl with its endpoint set to localhost, and we're doing that because, if we look at the way the kube-apiserver is configured, it's doing the same thing.
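The flag-to-environment-variable convention described above is mechanical, so it can be sketched as a small transformation. This is a minimal sketch of the naming convention (the specific flags shown are just examples of ones used in this episode):

```python
def flag_to_env(flag: str, prefix: str = "ETCDCTL") -> str:
    """Map an etcdctl command-line flag to its environment-variable form:
    strip leading dashes, replace '-' with '_', uppercase, add the prefix."""
    name = flag.lstrip("-").replace("-", "_").upper()
    return f"{prefix}_{name}"

# The endpoint and TLS flags used throughout this episode:
print(flag_to_env("--endpoints"))  # ETCDCTL_ENDPOINTS
print(flag_to_env("--cacert"))     # ETCDCTL_CACERT
print(flag_to_env("--cert"))       # ETCDCTL_CERT
print(flag_to_env("--key"))        # ETCDCTL_KEY
```

The same prefix idea applies to the server (ETCD_ instead of ETCDCTL_), which is what the manifest's environment variables rely on.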
A
So, according to the kube-apiserver, the etcd server is available at localhost:2379 — I'm using the defaults. And that's because etcd and the API server are both part of the same network stack: they're both on the host's underlying network stack, and so the API server is able to reference etcd at localhost. But some of the interesting questions are: what does that mean for connectivity?
A
And we're going to count them. So there are about 140 connections destined for localhost on port 2379, and what this means is that all of the established connections inside of the current conntrack table are connecting locally to 2379. That means that even if I have an HA-compatible etcd cluster, I'm only ever going to terminate my connections to the local etcd member. So the API server, if configured in this way, will only ever try to terminate to localhost on 2379.
A
How do we ensure that we're writing to the leader and reading correctly? The way that happens is actually built into the way etcd itself works: you can write to or read from any member in the cluster, and what will happen is that the etcd server will proxy writes to the leader.
A
"So in a kubeadm deployment, the kube-apiserver only connects to the etcd member on that same node where the control plane is?" Right, exactly. And because of the way etcd works, we're persisting that connection, and writes are forwarded back to the leader. We need to make sure that we write to the leader; we can read from anything. And the other thing that's neat about this is if you think about the way this lays out.
A
If you have multiple control plane nodes, those control plane nodes are going to be putting all of their read load on that local member, and only the member that is the leader is going to be doing double duty: the leader is handling writes and reads. But from the read perspective, load is actually being pretty well dispersed across the set.
A
"That is the default setup for kubeadm?" Yes. Yeah, I agree, and it also adds complexity. One of the things that actually makes it challenging is: what if I wanted to blow away the node and bring it back? So let's do that. Let's do docker exec into a control plane node, and then we'll do kubeadm reset -f. And so now, what I've done is:
A
I've wiped out all the certificates and I've wiped out all of the etcd data. To actually get this member back in, I would have to go back over to one of the other members, upload the certificates, get the join token for the control plane, and then come back over here and do kubeadm join --control-plane to actually join this member back in — and I would have to go back through that whole state of managing the etcd relationships.
A
If we all do a member list — the good news is that, because we did kubeadm reset, part of the kubeadm reset is actually removing that member from the cluster, and so that's a little bit more of the magic that etcd enables. But what if I actually just lost that node? How would I handle the removal of that? I think I'd have to go back in here and delete that member manually to be able to do that. OpenShift does this too, and so does kubeadm. So let's explore that. All right — clear — I'm going to delete the cluster.
A
But if you think about it, we're basically coupling the fault domain into that control plane node. This is the way that kubeadm thinks about the problem: the control plane node — your API server, your controller manager running there — all of those things are a part of the same failure domain.
A
You have to think about: okay, now I have to worry a little bit about the failure domain of that state, as I couple things that are stateless — like control plane nodes — into it. So if I wanted to add another API server, I'm coupling that with adding another etcd member, and that may not be the right thing depending on how you're actually managing the lifecycle of your clusters. So, interesting stuff.
A
You know what, I have actually scripted this: cat shared/scripts/join.sh. I basically made a little script that will tar up my CA files and then use this command to rejoin a member to the cluster. So let's just go ahead and paste that in, and now, if we go back down here, we jump back into our node.
A
We do /opt/bin/etcdctl.sh member list — we're back to three members now. So we've got our etcd cluster here. We're going to play with actually using it as an external etcd cluster, and we're going to talk about how to configure that and all that stuff as well. But first, let's show that it's empty, and so we're going to use that same etcdctl command — this is a trick to interact with etcd members.
A
We'll do a get with an empty string for the key, and then, just to limit the output, we're going to use --keys-only: only show me the keys. I can see there's nothing in there — it's as empty as all get-out at the moment, which is good, because we're about to bring up a cluster that will use this etcd cluster for its configuration. We move into our kind directory, and then we're going to look at the two different configurations that I've got here.
A
So we've got to get all three host names right and get all that stuff wired up, but let's make the assumption that we did that, and let's bring that kind cluster up. So that was kind create cluster with the each-member config. I don't remember how many control plane nodes I've got going on here, but let's play with it and see.
A
Let's see where we failed — looks like something broke. What have we got going on? Let me check the chat — how's everybody doing? Mr. Josh Ross is saying hello. We've got tmux — oh yeah. "Nice to manage lifecycle independently" — I agree. Terminated versus a lock — I haven't played with that, but maybe worth checking out.
A
Relative directories — I'm mounting in the shared relative directory, but the problem is that I'm doing it from the kind directory, because my shared directory is actually reached from this path. The beauty of relative directories, my friends — that is the challenge. So now, if we do kind create cluster with the each-member config, this time it'll work. How about that? I think it actually worked — I've got that part wired up; I think I had that right.
A
Sorry, that's not what I wanted — I was jumping in here to show something else. Okay, so we have all of these connections: each one of our hosts is connecting to all three etcd nodes. In fact, if we jump into one of our control planes — docker exec -ti into the kind control plane — and we do ss -tn and grep...
A
...grep, sort, uniq — old-school tools. All right, so on this particular node we have 68 connections to each of the etcd members.
A
Now, this actually means — and again, there are definitely some challenges in the way this works — one of the challenges is that the API server configuration is static. I can't update the members dynamically; I'd have to actually go to each API server and change the member list...
A
...if I had to remove an etcd member and bring up a new etcd member. Okay, but let's go ahead and play with the idea of what happens if I turn down one of my members. So again, kind of in the spirit of this etcd class, we're coming up with different ideas and things to break. So let's do that: docker exec...
A
...-ti into etcd-member-0, run bash, and then shared/etcdadm reset. And now I only have two members, but I still have quorum. I can see that member 0 sees no connections, which makes sense, and I can see that the number of connections hasn't changed: each of the API servers is still only connecting to the remaining two members.
A
So if I go back in here, I can see that on my control plane node I'm still only connecting to the two members that are left. I have no connections going out to the other one — no established connections, anyway — but I haven't increased the number of connections to the remaining members. Isn't that interesting? But what if I were to kill one more member?
A
Hey — and then we should see this join. Oh, you know what, we don't have any control plane nodes here anyway — kubectl describe... get nodes. Yeah, I only have masters; I didn't get it all.
A
Okay, and the leader is going to be member 2 — he's the only one that's left, so he has to be the leader. So let's play with a failure mode real quick, and then we'll also play with an upgrade case, which I think will also be kind of interesting when talking about etcd. In our case, like we said, the neat thing about this is that because there's no etcd running on the control plane node, I can wipe that control plane node, I can reboot it.
A
I can do all that stuff, and I don't have to worry about managing the state — I can just worry about the stateless applications, which makes it a lot simpler to think about, because etcd is not running on the control plane node. If we look at the Kubernetes manifests, we can see that etcd is not in the list. So that's a good thing.
A
And actually we're starting to see connections come back. It's the same thing we saw before: some of these connections are now able to re-establish to the other members, and we should see it level off at about 208 as time goes by and connections are re-established. So basically, the API servers are starting to kind of work back in and rehydrate those connections that had been terminated, and reconnect.
A
shared/etcdadm reset — and now we only have two members. But this time, for member 1, instead of doing a reset — which would actually take the action of effectively changing quorum back down to a single member — I'm going to just turn it off and see what happens. So I do systemctl stop etcd.
A
So in this case I'm in a failure mode: I have only a single member remaining and I have lost quorum. When you're in that state, the Kubernetes cluster is not usable — even though I still have connections to the remaining member, I'm not able to use it. So how do you think I could fix this? I could restart that etcd.
A
Okay, good — you can still see me. Weird — my chat window went away and I was trying to figure out why, but that doesn't really matter; as long as y'all can still see me, we're going to keep going. So let's talk about quorum real quick, and we'll explain what's happening here. Okay, so:
A
All right, so this is actually the doc I was looking for — it's in the administration section for v2, and I imagine there's probably also some of this content for v3 as well, but this part of it still applies to both, to be honest, because it's really about the quorum piece. Optimal cluster size: they recommend an etcd cluster size of three, five, or seven members, and the doc describes the fault tolerance and also what the majority part means.
A
So if you have a single member, the majority is one and you have a failure tolerance of zero — obviously, because if that member goes away, you're done; there's no more data. If you have two members, the majority is two, which is interesting: you still have a failure tolerance of zero, so you've gained nothing over a single member.
A
So when I added that second member — when I told both of the members in the cluster that there was another member out there — I went to a majority of two, and that means I now have a quorum requirement of two. If I were to turn one of those members off, the cluster itself will determine that it is in an unhealthy state and it will not proceed. In some cases you can still read, but you won't be able to write, because quorum has been lost.
A
So when I did my recovery, I basically turned that member back on, and that brought me back to a majority, and that means I could continue to interact with the etcd cluster the way I had been before. But you can see how this maps going further: if I jumped to three members, I could actually lose one member without adversely affecting my cluster. If I popped up to four members —
A
— I could still only lose one member without losing quorum in my cluster. And this cluster-size math assumes voting members — we'll talk a little bit about that as well — but once you get up to five members, you can actually lose two members without losing access to the cluster, and then it kind of starts to grow from there.
A
...assuming that all members of your etcd cluster are voting members. And then there's changing the cluster and member migration — these docs are really good, definitely worth checking out. A lot of this was written back when we were at CoreOS and has been updated since. So, our failure mode right here:
A
Here I would actually have to reinstate this member with a quorum of one. So I could take a snapshot, or I could restart this cluster from a single snapshot, bringing back a majority of one, and then the cluster would come back up and work. Or, alternatively, I could just bring the stopped member back into that state and continue to work there. So let's do that.
A
I'll put it to you: do you want to see me lose quorum and then fix it, in such a way that we leverage etcdadm to restore from a snapshot? Do y'all want to see that? I'm watching the chat to see if that's something that might be interesting to you — if so, I could probably do another episode on etcd or something. "Yeah, sure, let's see the remaining etcd logs" — okay, let's do that.
A
So we can't even do a request, because it's totally down. But let's go ahead and do docker exec -ti kind-etcd-member-1 and systemctl start etcd, and then, as soon as that becomes healthy again, we can see the connection happening — and then BOOM, we've just unblocked the world and now we're getting a stampeding herd.
A
It's saying requests took too long — probably because we're still sharing the same I/O for everything. And that is one of the other big challenges of etcd and one of the things to really keep in mind — I want to make sure we cover that. It's also one of the other big benefits of moving etcd off of the control plane.
A
My entire system is on a single disk right now, including these etcd members, and that means that as I increase the amount of I/O that I need for other things, I'm going to lose margin for etcd — and that can actually cause things to go bad. "Yeah, quorum 2 is worse than quorum 1" — I agree. Okay, so, cool. Any other questions before we move on?
A
...really talking about how quorum and Raft work. I thought that there was actually a talk specifically about etcd — I know that there have been a few different Raft comic-style descriptions of how it works in etcd — and especially because etcd now has different modes, which is actually really cool. So etcd now has a...
A
All right, but we need to keep moving forward, so let's keep going. The next thing I wanted to talk about was monitoring, because one of the questions that somebody asked was about understanding how to monitor etcd and how to debug it, and that sort of stuff. Now, what's neat is that etcd is a Go program, so it does have a debug pprof endpoint. If you enable it, you can really dig into the crazy details of where the etcd binary is spending its time.
A
So if you have a scenario where things are just not working the way you expect, you can really kind of dig into what's happening there. I don't really see that happening super often, but there's also a debug requests endpoint, which gives you the ability to debug what's happening in the requests — so you can see what the keys and the values are. This was a question that Mr. ...
A
...it can be used — oh wow, that's broken — but there is actually an etcd dashboard for this in the Grafana marketplace, or you can instrument this stuff yourself; it has an example of what that dashboard looks like. When configuring etcd, you're typically going to need a client certificate to authenticate to etcd, even for metrics, and so it's important to keep that in mind: you're going to have to authenticate to etcd even for metrics. Let's just kind of show that off a little bit here.
A
I'm going to turn off the CA check, and then we can see this result, which is a little confusing at first if you don't understand what's happening. What it's telling you is: "this is not enough information for me to allow this connection — I need a client cert as well." So here's one way to get that: set the cert to the API server's etcd client cert and key under /etc/kubernetes/pki.
A
I'm going to grep for HELP, which pulls the help line for each of the metrics that is exposed. So we have things like how many milliseconds a DB compaction took, how many milliseconds particular deletes took, or events. This is just looking at etcd as kind of a general tool: for this particular member, here are the metrics related to things that we think are important to monitor for etcd.
A
So: how long it took to actually commit the backend for etcd to disk, and what the fsync duration is. These are really pretty important metrics for understanding the overall health of etcd in general. How long it takes for us to persist to disk can be an indicator of whether there's I/O starvation happening on the node, and if there is I/O starvation, you're going to see things taking longer to coalesce.
A
You're going to see things failing in kind of interesting ways, and that's why it's actually pretty cool that these metrics are exposed by every member. We also have a metric for whether this particular member is the leader or not. We have metrics coming back from Go memstats, exposing how much memory is being used, the GC mechanisms and all that sort of stuff, how many CPU seconds are in use, open file handles, resident memory.
A
It's a pretty complete metric set that describes the health of this etcd cluster. But one of the things I wanted to point out is that, because we're actually vendoring the etcd client in as part of the API server, we also expose some really interesting information from the API server. I think I've talked about this in a previous episode, but if not, let's go ahead and look at that real quick. So we do kubectl get --raw.
A
So each of the API servers is also exposing a metrics endpoint, and we can grep for etcd inside of there and see basically what we expose at the etcd layer from the client side. Now, because we're doing this at the client side, we can actually see some other really interesting information, especially if you're trying to troubleshoot a failed etcd cluster. There's a lot of really good information exposed at the client side: how many connections there are of the different types, how long it takes to actually make a request for particular information.
A
Object count — that's what I'm looking for. So if we grep for the object counts, we can see how many objects of which type are being stored in etcd. If you're looking for challenges — if you're seeing etcd kind of slow down and you think perhaps what's happening is something is overloading it with a particular type — this is actually a pretty interesting troubleshooting step: understand what you're storing, by type, in etcd.
A
Looking at the object counts, one of the things highlighted here is that the biggest object count we have inside the cluster right now is events — and it's also the highest churn; it represents the biggest amount of churn inside of an etcd cluster. And so one of the patterns for etcd, especially in big, stable environments...
A
...is to move the events off onto a different etcd cluster. I'm not going to explore that here in this episode, but it is a configuration of the API server in which you can specify a particular path — in this case the events part of the registry — and push that to a different etcd cluster, rather than having it sit against your main etcd cluster. And this is one of the mechanisms by which you can handle the scaling of etcd, especially in large clusters, differently.
A
So I just wanted to call that out. Back to this example in our lab, which I think is still interesting. So I'm going to go ahead and join member 0 back in, and I'm going to play with putting a load balancer for etcd in front of our cluster. So we'll do kind delete cluster — blow away the cluster here real quick — and then kind create cluster --config...
A
So a connection comes in here, and I establish a new connection back to the backend node, but I don't terminate TLS, and I'm trying to balance based on the least number of connections. The goal would be to make it so that I have an even number of connections to each of the members. So let's play with this connectivity pattern real quick before we move on to the next piece — I'm going to go ahead and bring up that cluster: kind create cluster --config...
A
So this is actually kind of neat — at least I like it. What's happening here is that when we try to join beyond the first member, there's already a member named kind-control-plane-2, and it's already in status Ready — because when we deleted the old cluster, we just left the state in etcd behind. And so what that meant was that there's already an existing definition for certain things inside the cluster.
A
With /opt/bin/etcdctl.sh member list we'd see all of our members are up, and if we do our get, just like we did before...
A
We still have minions: kind-control-plane, control-plane-2, control-plane-3. This is an interesting artifact of the history of Kubernetes: originally these weren't called nodes, they were called minions. And then we also have these Lease objects — the lease for control-plane-2, control-plane-3, and so on. Let's take a look at the health of the minion real quick.
A
Each API server was actually connecting to each member of our etcd cluster. So in our old configuration — cat the each-member kind config — we were telling each API server about each etcd member, but in our new configuration we're only telling it about the load balancer, and that reduces the number of connections that the API server is going to make, because it only has one endpoint.
A
It has the endpoint that is the load balancer, and so it's actually going to terminate all of the connection pool that the etcd client embedded inside the API server is going to use toward that load balancer. So for each API server we're going to establish that same number of connections — I believe it's around 20 connections per control plane node — and we can see what it is.
A
So we have 68 connections from this host, and we'll have 68 connections from each of the three hosts — but in our case that's spread across three etcd members, so we see those 68 connections kind of dispersed across each backend. But what about the different failure scenarios? Right now we're looking at HAProxy, and we're seeing those connections balanced across each etcd member.
A
That's what we're looking at here. But let's play with the different failure scenarios that we played with before. So let's go ahead and remove this member. What do you think is going to happen? Any theories about what's going to happen if I remove a member from behind HAProxy? What's going to happen to the connection path? We're using a load balancer now, so what's going to happen — what do you think, anybody?
A
So, unlike the previous connection pattern — where we saw the connection count remain the same — instead we're seeing more connections on the remaining members. That's because the API server is still trying to connect to the only thing it knows about, which is the load balancer, and what it just got was a whole bunch of connection closes on a whole bunch of the existing connections that it had. And so then it was like: well, let's try that again.
A
What do you think will happen? So there are our four listening ports, and now, as we wait for a second, we're going to start seeing connections get shifted over. As connections come and go and die, we're going to see that balance get brought back up as the API server terminates its connections and starts re-establishing new ones. In our log here we see the exiting member 0 become healthy again, and then, as connections rotate over here...
A
Wow — immediate balance, right? And again, because HAProxy comes up and starts to disperse connections... We were actually down for just a brief moment there. Well, actually, because of HAProxy's reload behavior, it probably didn't actually lose any connectivity — I can still do get pods. Let's do a watch: kubectl get nodes.
A
So now, every two seconds, right: if I do that removal command, that restart, again, I didn't actually see the hit in etcd. So in this case, because I'm using HAProxy, and because HAProxy effectively has live reloading, I didn't see a drop in connectivity from the API server's side; I was still able to connect to it.
A
And new connections are just going to start balancing, and, you know, if we just let this run, they will balance over time. We're gonna see those connections balanced across the remaining set. It's not going to be super aggressive, because remember that Kubernetes uses a connection pool back to the etcd cluster, right? And so it's going to just maintain that connection pool, and as it iterates over those connections and they terminate or they get closed, we're gonna establish new connections via the HAProxy to existing nodes.
A
Yeah, so, let's talk about resilience real quick, and then, this is actually, you know, the last topic I want to talk about before we move on, right. So what we talked about here is a couple of different patterns for hosting etcd. We've talked about why stacked etcd and external etcd have different trade-offs.
A
With stacked etcd, you can kind of couple it into tooling like kubeadm; with external etcd, you can decouple state from the control plane, and you can more easily manage, you know, scaling your control-plane nodes up or down without having to also scale etcd up and down, so you're scaling these things differently. It's sort of the same argument about whether you would want a stateful thing and your application in the same pod or not, right? Separating these things out into different pods gives you better control over the different aspects of it.
A
We've talked about different deployment patterns. If we deploy a configuration in which we tell the API server about each of the etcd members, then we get a bigger connection pool, and a more resilient connection pool, back to all of the etcd members. But at the same time we have a static configuration in the API server; we can't really modify that list without restarting the API server.
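The static list being described here is the API server's etcd flags. A hedged sketch of what that looks like, with illustrative addresses; changing the member list means restarting the kube-apiserver, which is exactly the trade-off just mentioned.

```shell
# Hedged sketch: telling the API server about each etcd member directly.
# Addresses and cert paths are placeholders. This is a fragment of a
# kube-apiserver invocation, not a standalone runnable command.
kube-apiserver \
  --etcd-servers=https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```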
A
We've talked about putting a load balancer in between etcd and the cluster, between etcd and your Kubernetes cluster, and the benefit of putting a load balancer there is that you get a little bit more resilience in the way that you determine the backend connection, right. But at the same time you end up with a smaller connection pool for the API server: the API server is only going to establish those 64 connections to that single etcd member.
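The load-balancer pattern from the demo might look something like this minimal HAProxy configuration; the server names and addresses are assumptions, not the exact config used on stream.

```
# Hedged sketch of an HAProxy TCP load balancer in front of etcd.
# The API server is pointed at this frontend instead of the members.
frontend etcd-client
    bind *:2379
    mode tcp
    default_backend etcd-members

backend etcd-members
    mode tcp
    balance roundrobin
    option tcp-check
    server etcd-0 10.0.0.11:2379 check
    server etcd-1 10.0.0.12:2379 check
    server etcd-2 10.0.0.13:2379 check
```

The health checks are what let HAProxy shift connections away from a removed or failed member, as seen in the demo.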
A
So that's kind of interesting also, and that means that, resilience-wise, you're gonna have to kind of deal with that behavior. So these are all different things that you can kind of model and play with and explore in the way that you think about how etcd interacts with your Kubernetes cluster. And what I'm gonna commit here is basically a lab that would allow you to kind of explore those things independently of manipulating an existing cluster, right, and so it's just kind of an interesting way of.
A
Basically, you know, exploring the different tools and the different tooling that you have at your disposal. So one of the other things I really love about this lab is that you can also break things and fix things and explore different models for how to fix things. Like, if you wanted to actually explore, you know, breaking quorum down to a level of one and then figuring out how to re-establish quorum.
A
That is a thing that you could explore in this lab without having to actually break the world. Now, I might come back and do another etcd session on just breaking quorum and fixing it and those sorts of things, like problems I've seen in the wild, but yeah, it's definitely about failure-scenario management. It's like, you know, in security, in application and infrastructure security, we have this idea of.
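One recovery path you could rehearse in that lab is restoring from a snapshot once quorum is gone. This is a hedged sketch using etcdctl v3; all paths, names, and URLs are assumptions, and it needs a real etcd cluster to run against, so treat it as a lab exercise rather than a runbook.

```shell
# Hedged sketch of quorum-loss recovery via snapshot restore.
export ETCDCTL_API=3

# Take snapshots regularly while the cluster is still healthy.
etcdctl --endpoints=https://127.0.0.1:2379 snapshot save /backup/etcd.db

# After losing quorum, restore the snapshot into a fresh data dir,
# seeding a new single-member cluster.
etcdctl snapshot restore /backup/etcd.db \
  --name etcd-0 \
  --initial-cluster etcd-0=https://10.0.0.11:2380 \
  --initial-advertise-peer-urls https://10.0.0.11:2380 \
  --data-dir /var/lib/etcd-restored

# Start etcd from the restored data dir, then grow back to three
# members one at a time with `etcdctl member add`.
```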
A
It's gonna be a huge part of your success in delivering distributed systems to your customers, so that's one of the things to keep in mind. I hope that this was educational and helpful. If y'all are interested in seeing me, like, come back and do another etcd episode, please just let me know in the chat and I'll probably do that. Thanks so much for your time and hanging out with me on this beautiful Friday afternoon, and y'all have a great weekend.