From YouTube: TGI Kubernetes 092: Continuing Minecraft Controller
Description
Episode notes up at https://github.com/heptio/tgik/blob/master/episodes/092/README.md.
Work on the controller starts at 00:23:43.
Come hang out with Joe Beda as he does a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Joe talking about the things he knows. Some of this will be Joe exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
In this episode we will pick up where we left off with the Minecraft controller we started in https://tgik.io/083. We'll be doing some coding and exploring what it takes to build an operator/controller.
Hello everybody, and welcome to TGI Kubernetes. I am your host, Joe Beda. I'm a principal engineer at VMware, and for those who don't know, TGI Kubernetes is our mostly weekly YouTube live stream where I play around with all things Kubernetes and we learn a bunch of stuff together. We are on episode, oh my goodness, 92 now, so we've been doing this for a while. So first of all, sound check: do I sound good to everybody? Things working well? Awesome, awesome!

So as we start these things, I like to make sure I say hi to everybody in the chat; I love making this as interactive as possible. With us as always is Lumati, good to see you, and Rory, also from Scotland. Shahar, good to see you, and hello to our viewer from Bangladesh. Duffie from our KAT team here at VMware (that stands for Kubernetes Architecture Team), also often a host of TGIK, is joining us, at least for a little bit.

Thanks for joining us, Duffie. Marko from Belgrade. So, I can't figure out how to silence Slack here, so Slack is now causing me trouble. This is the problem: usually I was able to silence it, but somehow that broke recently. Anyway, you may hear Slack going off; I'll quit it if it causes too much annoyance.
Let's see, we have somebody from Perth, where it's 4:00 a.m. Wow, thank you for joining us. I said hi to Marko. Steve, another VMware person, thanks for joining us, Steve. José, Sandeep from Milan, Syed: good to see y'all. All right, so this is going to be an interesting episode. My plan this episode is to continue some live coding on stuff we did several weeks ago: essentially using Kubebuilder to build a Minecraft controller.

Now, here's the dirty secret: I haven't been doing a lot of coding recently. Much of my job is essentially drawing boxes on a whiteboard, or sort of the equivalent with Word and Google Docs and Confluence. So you're going to see me be rusty, which I think is part of the fun here. Oh, okay, so somebody's telling me that I can go through and tell Slack to silence itself.

How do we manage graceful shutdown of these things? It turns out that you can lose state with Minecraft servers if you don't actually gracefully shut them down. Then there are also interesting issues with respect to networking, because the Minecraft protocol is not HTTP, so a lot of our standard patterns don't necessarily work there. We won't be able to get to all of that today, I promise you, but at least that's the direction we're heading.
So let me go ahead and get started here. One of the things we like to do, and I'm going to switch to my screen, is review news of the week: what's been happening in the Kubernetes world. Duffie and Jorge helped to pull some of these links together, which I think is great. The first thing I'd like to do is announce that we have results for the Kubernetes steering committee election. The folks joining us: Christoph Blecker; Derek Carr got re-elected, so he's continuing to be a member of the steering committee; Nikhita, and she is a great story: she and Lucas gave a great talk at KubeCon (you can find the YouTube link) about their journey joining Kubernetes, so it's great to see Nikhita actually get involved to the point where she's now on the steering committee, which is awesome; and then Paris, who's been a long-time member of the community, super involved in contributor experience and other stuff.

It's really exciting to see that happen, so this is really quite the milestone. Oh no, I'm still getting Slack doing its thing on me here; I don't know what's going on. So I'm going to go ahead and, give me like three seconds here, is there a way I can tell Slack to mute me now, forever and always? Stupid thing, I know.

I think part of it is that it ends up being a preference on a per-workspace basis, and so that ends up being a pain to actually get set. So unless I can find something in the next three seconds, I'm just going to quit Slack. I do have Duffie and Jorge sending me notes there, and that's why I want to keep it open, but I think instead we're just going to have to communicate over the comments here.
So I'm going to quit Slack. Okay, cool. Yeah, I did the option-click on the top right, so I did have notifications turned off, but I think there's something, maybe in the Slack client, where it actually ignores that now; I think that might be what's going on. So anyway, Duffie tried to add the keynote link to the chat, but it squashes links there, so it got added to the show notes instead. If the rest of you want to add something to the show notes, feel free to go ahead and do that.

And then Lumati is asking: TGIK episode 100 is right around KubeCon. Okay, so we're going to do a live TGIK type of thing at KubeCon, but I don't think we're set up to do a live stream there. I mean, we're going to do an inception type of thing here: if you saw my setup, I have a huge monitor and my laptop, and the whole thing is sort of set up to be able to broadcast. It's really hard to broadcast from just a laptop. I'd love to do something live there, but I think we'll probably have to plan something for episode 100 separately, to be able to celebrate that and do a bunch of stuff.

Okay, so let's see, other news; I want to get through this fast. KubeVirt joins the CNCF. KubeVirt is a really interesting project, mostly being driven by Red Hat right now, which is essentially about running VMs inside of containers.
So there are different ways that VM technology and container technology are mixing, and this is a really interesting way of doing it. In some ways this mirrors, I think, the way we built Compute Engine back in the day. There are also parallels between this and some of the work we're doing at VMware around Project Pacific: different sorts of ways to skin the cat. So this is actually really interesting, and I believe it joined at the sandbox level.

We already have a 1.17 alpha. This really, to me (because it feels like we just got 1.16), is an indication that we're starting to get a lot of the release mechanics down, so that we can actually start the next release cycle as soon as we finish the previous one.

There's a great talk here, from Kenta Iso I believe, on what goes on sort of inside of a Kubernetes controller. This goes super deep, and it covers a lot of the stuff that we're going to be talking about today, but then it actually starts digging into not just the diagrams about this, but the different components that come into play: things like informers, listers, work queues, stuff like that. It's a really cool deck that goes really deep; I'm going to move fast here, though.
So that's cool. And let's see: we have Amine from Algeria, good to see you. Steve, "CPP for life", from San Francisco, good to see you also. Who is CPP for life? That rings a bell, but I'm not putting the name to the handle for some reason; say hi. Okay, and then let's see: Alex Ellis. So Alex does OpenFaaS. I haven't had a chance to play with this yet, but I think this might be a fun episode.

So it's Dimitri; okay, good: Steve, Dimitri. So a while ago I was playing around with, yeah, he hates kapp, that's funny. So Dimitri doesn't like the kapp stuff, which is great. So I did this thing a while ago on my GitHub (github.com/jbeda) where I was asking: hey, how can I adapt ngrok to Kubernetes? And what it ended up being: I ended up not writing code; it ended up being a simple Dockerfile that did nothing but run ngrok, along with some kubectl run.

So it's like: hey, you know, using ngrok to be able to get access to Services, really interesting. I kind of left it at that. I mean, for those who aren't familiar, ngrok is this thing that you can run on your laptop that essentially exposes your laptop to the Internet. So it's like a security nightmare, but it's also really useful for being able to sort of share stuff off your machine.
A
Lets
me
say
hi
to
folks,
so
we
have
a
Bhargav
from
BB.
What
is
BB
ganesh
from
delhi
and
fully
geared
bear
from
portugal
good
to
see
y'all.
Let's
see,
there's
a
meet
our
contributors
this
week,
it's
a
monthly
show.
So
is
this
okay,
this
one
hasn't
started
yet
so
this
is
gonna
be
on
october,
2nd,
which
is
wait
night
streamed
alive,
oh
no,
no,
okay!
This
is
this
is
the
previous
one.
So
there
was
a
lot
that
happened
there
and
so
DIMMs
was
on
it.
A
Nikita,
Paris
and
Kieran,
and
so
dims,
Nikita
and
Paris
are
all
now
on
the
steering
communities
committee,
and
so
this
is
a
great
way
to
sort
of
to
have
people
share
their
tips,
the
ins
and
outs
of
what's
happening
and
then
and
then,
if
you're
interesting,
becoming
a
contributor
check
out
the
contributor
summit,
which
is
upcoming
in
San
Diego,
you
can
register
to
that.
It's
adjacent
to
cube
con
and
so
the
contributor
summit.
A
There
was
an
interesting
development
around
sto
and
K
native
and
I.
Think
you
know
if
you've
been
sort
of
paying
attention
to
a
bunch
of
the
sort
of
inside
baseball
sort
of
you
know.
Ups
and
downs
of
the
cloud
native
world.
There's
always
been
this
question
of
you
know
sto
and
K
native.
Are
these
things
actually
going
to
become
part
of
a
foundation
part
of
the
CNC
F?
What
is
their
relationship
to
kubernetes?
Are they going to be donated? Are they not going to be donated? What are Google's intentions about these things? And, you know, this last week there was a message sent to the Knative developer list, and I've heard through the grapevine that the same thinking is being applied to Istio, where there is no intention from Google at this point to donate these projects to the CNCF or any other foundation.

And so it's good that we have clarity there, but I think it really highlights, in my mind, the difference between open source and open governance. Open source means: hey, the source is out there. You can fork it, maybe you can contribute to it, but at the end of the day there's a single company or single entity that owns the roadmap, the use of the trademark, all the things that actually add up to sort of defining the direction of the project. And Kubernetes is very much open governance, I think both at the CNCF level and at the steering committee level, and I think the fact that we announced the results from our election, and that, as a founder of the project, I'm no longer on it, is a great example of sort of open governance in action. And I think what we're seeing is that there is no open governance for Istio and Knative. Now, what this means practically for folks, I think we're still figuring out, but it is a development, and I know a lot of folks are interested in digesting that.

So that's just a little bit of color, a little bit of thinking there. It's definitely been something that has been on folks' minds over the last week. So yeah, you know, if folks have any questions about that, I'm happy to answer. But it's one of those things where I don't think we're actually going to see anything immediately. Oh, and I skipped over one item here.
So: there's a CVE in the Kubernetes API server. Rory, who's online here, is actually the one who filed this issue. This is a really interesting thing; I'm sorry I skipped over it. So it's called a "billion laughs" attack, for YAML. Okay, so here's the fascinating thing (nice catch, Rory): you can take a look at YAML, and YAML is, actually, I think we mostly use it as a plain old data container, you know, maps and lists and stuff like that, essentially a different way of writing JSON. But there are actually other capabilities built into YAML, where you can essentially have references that refer to other things. And if you do this, you can have essentially fairly compact text. But if you were to say, hey, evaluate this YAML file as a JSON file and resolve all of those references, the thing ends up getting really, really, really big.

This is essentially an amplification attack, where you can send a small piece of YAML to the API server and it explodes into a huge amount of memory in RAM as the server decodes it. And this is in the same vein of, you can do the same sort of thing with zip files. So there's, what are they called, they're like a "zip file bomb"? That's probably not the right name.
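To make the shape of the attack concrete, here is a small illustrative snippet (not the actual CVE payload) using YAML anchors (&) and aliases (*); each level multiplies the previous one, so a document like this with a dozen or so levels expands to billions of items when resolved:

```yaml
# Each alias *aN expands to a full copy of the anchored value &aN,
# so every level multiplies the decoded size by three.
a1: &a1 ["lol", "lol", "lol"]
a2: &a2 [*a1, *a1, *a1]
a3: &a3 [*a2, *a2, *a2]
a4: [*a3, *a3, *a3]   # already 81 strings; keep adding levels and it explodes
```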
A
So
so
this
is
the
same
vein
there
and
the
fact
that
kubernetes
is
actually
subject
to
this
is
kind
of
fascinating,
because
most
of
the
tooling,
most
of
the
time
when
people
actually
deal
with
kubernetes
will
use
yeah,
mold
client-side,
but
then
will
use
you
know
either
either
json
or
even
proto.
When
talking
to
the
api
server-
and
so
it's
very,
very
rare
for
folks
to
actually
send
GM
over
the
wire
to
the
api
server,
but
for
historical
reasons,
you
actually
can
you
can
send
Yambol
to
the
api
server.
A
A
Where
we're
going
to
look
to
actually
found
the
expansion
of
these
things
when
you
upload
yeah
Mille,
but
really
fascinating
an
interesting
attack,
I
think
it
goes
to
show
in
some
ways
that
unintended
consequences.
It's
like
hey.
Why
not
accept
yam?
Oh
well,
here's
here's!
Why
not
it's
kind
of
a
kind
of
interesting-
and
so,
let's
see
so
Lamanna-
is
asking
about
the
the
CN
CF
stuff
I'll.
Get
to
that
in
a
second.
A
In
slav
says
how
the
mo
bomb
effects
server
side
apply,
well,
server
side
applied,
doesn't
send
llamo,
it's
essentially
still
different
ways
of
actually
storing
content,
server
side
and
then
merging
it.
It's
still
very
much
sort
of
plain
old
data
mechanisms,
whether
that
plain
old
data
B
be
proto
or
JSON,
we're
still
not
sending
the
ammo
over
the
over
the
wire
there
and
so
I.
Don't
think
that
server
side
apply
is
actually
in
a
you
know,
intersects
with
this
in
any
way,
but
you
know
always
surprised
right.
So it's really hard for me to say. And then there's a question also about the CNCF landscape page. There are commercial products, there are non-CNCF open-source projects; there are all sorts of things that actually show up on that landscape page. So being on the landscape doesn't mean that you are part of the CNCF, and it doesn't really mean anything in terms of whether you're open governance or not.

So one of the things maybe I'll suggest to the folks at the CNCF is that, you know, marking which projects we think are open governance might actually be something that's worth highlighting on the landscape page. One of the things that Chris Aniszczyk (and I'm probably pronouncing his name wrong; oh gosh, what's his official title at the CNCF?) set up was opengovernance.dev, and if you go there, it's actually a great checklist.

Yes: CTO of the CNCF. Do I think the old Heptio projects are on their way to CNCF open governance? Yeah, we're definitely looking at those. I think for us, you know, as we see a certain amount of traction, it's something that we're going to be looking to expand, and we'll take them there if it makes sense. My personal take on this is that it's, okay, oh no: Chris is CEO of the CNCF, not CTO.

My personal take on this stuff is that it's fine for open source not to be open governance. I think it creates confusion and stress for people when folks bring assumptions, or there are implied contracts with your community; and I think any time folks change up open-source licenses, what it ends up doing is changing the implied contract with your community. And I think a lot of folks assumed (and I think Google, you know, was definitely not clear early on) that Istio and Knative were on their way towards a foundation with open governance, and that it was really going to be a community-driven project. And I think the honest thing here is really, you know: folks shouldn't feel bad building their business on top of an open community project, an open governance project.
At that time we were using, not kustomize, shoot, I'm going to fix some stuff up here: we were using a sort of pre-release version of Kubebuilder. One of the things I did feverishly in like the half hour before this episode was try to update the project to Kubebuilder version 2.0.1, and I was like three seconds away when I made one mistake with git here. So I'm going to go through and fix some stuff up; you're going to watch me work with git for a second here.

git commit --amend, and I always remember that it's two dashes, one "m". I can go ahead and do this, and we can see here, I amended: I didn't mean "kustomize 2.0", I meant "kubebuilder 2.0". So I'm going to do that, then git push, and it's going to tell me no, I can't do it. git push -f; "f" for "screw it". You can fill in the gaps. The bottom of the terminal is cut off... there we go. Okay, there we go. Is that better? Can you see it now?
The other thing I did: the way that I had to upgrade Kubebuilder, what I ended up doing is creating a new Kubebuilder project and then essentially using the diff features built into VS Code to find places where things were different, find the stuff that actually mattered, and copy my code over. As I did that, I missed two files, and so I'm going to actually recreate those real quick; I think the easiest thing for me to do is just to recreate them.

Let's see here: I don't want the rich diff, I want to actually look at the actual raw file. Take me to the file, copy... no, there, it's here: view file. There we go, view file, raw, Ctrl-A, Ctrl-C, or Command-A, Ctrl-C, Ctrl-V. Okay, so boom, there's that one, and then we're going to do a little hacking here on the license.

There we go. I know, I could have done git checkout HEAD and then blah blah blah, I know; but when it's just a couple of files, I'm just going to copy and paste some stuff. It's easy enough! Okay, so now we're good. Okay, so, where we left things off last time... let's see where we were at, and I think I should be good to go here. I think last time I was working against a cluster that was running in Amazon.
What I did is I made sure that we could build this, so things actually build; so at least I know they build. And then, one of the things that you need to do as part of, you know, installing your controller into a new cluster is you need to call make install, which essentially creates the manifests and applies them. So I did a make install; we can do that again. This ends up calling kustomize directly, versus using kubectl and kubectl apply -k, and so I had to make sure I had kustomize installed, also the latest version of it; I did that with brew. So there we go, we got that installed, and now we can do make run. When we do make run, this will actually go through and generate the controller and then run the thing locally.

So this is running locally on my machine; this is a Mac binary that's using the KUBECONFIG environment variable that kubectl is using, and it's talking to my kind cluster, which is also running on my machine. Because it's kind (Kubernetes in Docker), it's actually running with Docker, and Docker is running in a VM managed by Docker Desktop. So that's what we have going on here. And so what we have is: this thing comes up, and we're starting the controllers, we're starting workers; and this is as far as I've gotten as far as testing this thing out.
Is this? No, this is the CRDs themselves. Samples: we have minecraft, so here's our sample here. So we're going to create a Server; this thing's called "my server", and we're setting a bunch of stuff in this spec: saying yes, we agree to the Minecraft EULA; we give the server a name; which server type we want, vanilla versus like the hackable ones like Spigot, and all that; and then what users we want to actually allow to log in and op, and stuff like that. We can expand this list over time, and I think one of the things I'd love to do is get to the point where we can do things like plugin management and stuff like that, using our controller. And so the end result here, which I think is cool, is that we're going to end up with a Minecraft server API for being able to actually launch managed Minecraft servers, and we're building that up. So what we can do here is we can go into config/samples and we give kubectl... well, let's do, we can do kapp, with my server.
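The sample being applied is roughly this shape; the field names below are my reconstruction from the description in the episode, not the actual file:

```yaml
apiVersion: minecraft.example.com/v1alpha1   # hypothetical group/version
kind: Server
metadata:
  name: my-server
spec:
  eula: true            # "yes, we agree to the Minecraft EULA"
  serverName: my-server
  type: vanilla         # vs. hackable distributions like spigot
  ops:                  # users allowed to log in / be operators
    - somePlayer
```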
Isn't it kapp deploy, right? And I like kapp because it actually shows you what it's going to be doing against the kind cluster here. What's interesting: it doesn't actually list the group, so that'd be interesting at some point, but anyway. It knows that we're going to create this and then it waits for it to reconcile; there's nothing to reconcile. It would be interesting if we could have enough metadata, and this would be something between, you know, metadata that gets reflected through the OpenAPI stuff, that kapp could actually be able to read, so it could actually recognize something as a condition and actually know when the thing is reconciled. But we're not writing conditions yet; it's something that we can do, an interesting thing to think about.

Anyway, so we have this thing being created. Now, what should be happening, if things work well, is that the controller should be doing stuff. What we see here is that it's reconciling and it's trying to create, but then it's running that loop again and again, and as it does, it's trying to recreate the thing. So we have kubectl get pods: we have our Minecraft server pod running. Okay, so we at least, like, we create a custom resource that causes the thing to actually create the pod. That's where we left things off last time, because we're not doing true reconciliation yet in our controller.
So this is actually saying: hey, we're trying to log, and then it tries to fetch that Server. If we can't find the thing, then something happened and we'll actually return. And then here's the code that actually constructs a pod and creates it. And so what we need to do, essentially the next thing, is to say: if the pod is not created already, then, you know, we want to go ahead and do this. Okay, so we have to actually figure out whether the pod has been created already or not.
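In controller-runtime terms, the shape described above looks roughly like this; the type and field names are my assumptions, not the episode's exact code:

```go
// Sketch of the Reconcile entry point: fetch the Server, bail out cleanly
// if it is gone, then ensure its pod exists.
func (r *ServerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	log := r.Log.WithValues("server", req.NamespacedName)

	var server minecraftv1alpha1.Server
	if err := r.Get(ctx, req.NamespacedName, &server); err != nil {
		// The Server may have been deleted out from under us;
		// not-found is not an error worth retrying.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// TODO: only construct and create the pod if it doesn't exist yet.
	log.Info("reconciling server")
	return ctrl.Result{}, nil
}
```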
That's the first thing that we can do. The second thing that we can do is we can make a synchronous call to the API server to say: well, get me the pod, live. And so, as you're building a controller, there's a decision that you need to make in terms of whether you want to use cached values or whether you want to use live values, and I think a lot of times things will be faster and more resilient, and you'll put less load on the server, if you use the cached version.

Now, one of the things that I want to do here, because I haven't written a lot of code like this in a while, is I want to actually go through the example controller that is part of the Kubebuilder tutorial and just copy code from there, because I don't know if I'm smart enough to figure this stuff all out by hand here. So here's what they're doing: we're listing all the active jobs and updating the status. So we're going to copy this pattern here.
So what we're actually going to do: here they have "child jobs", but we're actually going to call this "child pods"; and so that's the Kubernetes batch API, but this is actually not going to be a batch thing, this is going to be a core PodList. Okay, so we're going to get a list of pods, that's going to be our child pods, and then we're calling this on "r" here. So this is the ServerReconciler (a ServerReconciler reconciles a Server object), and we're calling List on that.

The ServerReconciler is also a client, and so, in this case, we're not using a cache: we're making a live call to the API server to actually get this. So that means every time we reconcile, we're getting the freshest possible information about the pods, and so this will actually go through and actually hit the API server on the other side. So here we go: list child pods in namespace, matching field, the job owner key. So what is the job owner key, the owner key?
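Adapted from the Kubebuilder CronJob tutorial's childJobs listing, the pod version looks roughly like this; podOwnerKey is the index key registered with the manager, and the surrounding names are assumptions:

```go
// List the pods in this namespace whose indexed owner field matches the
// Server being reconciled. Because r is a client, this call goes to the
// API server (or the manager's cache, depending on how r is wired up).
var childPods corev1.PodList
if err := r.List(ctx, &childPods,
	client.InNamespace(req.Namespace),
	client.MatchingFields{podOwnerKey: req.Name},
); err != nil {
	log.Error(err, "unable to list child pods")
	return ctrl.Result{}, err
}
```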
There's a whole bunch of things that we're actually setting up here as we do this. So one of these things: we have the apiVersion and the kind, which identify essentially the type of the object that is the owner; then this is the name of it; and then there's also a UID, so if things get recreated and stuff like that, we can figure that stuff out. And then: what does "controller: true" actually do? What is that supposed to be?

"Points to the managing controller." Yeah, interesting. I don't know why it's "controller: true" versus... I wonder when that's false; I'd love to actually understand that. That's not something that I have a deep amount of knowledge on. Okay, and then blockOwnerDeletion: if true, and if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed; defaults to false. So there's actually some sort of referential-integrity-type stuff going on here. All right.
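The ownerReference fields being read off the screen can be modeled with a tiny self-contained sketch; these are plain structs standing in for metav1.OwnerReference, purely illustrative:

```go
package main

import "fmt"

// OwnerReference mirrors the fields discussed above: APIVersion and Kind
// identify the owning type, Name/UID identify the instance, Controller
// marks the single managing owner, and BlockOwnerDeletion ties into
// foreground-deletion referential integrity.
type OwnerReference struct {
	APIVersion         string
	Kind               string
	Name               string
	UID                string
	Controller         bool
	BlockOwnerDeletion bool
}

// controllerOwner mimics metav1.GetControllerOf: return the one owner
// with Controller=true, or nil if there is none.
func controllerOwner(refs []OwnerReference) *OwnerReference {
	for i := range refs {
		if refs[i].Controller {
			return &refs[i]
		}
	}
	return nil
}

func main() {
	refs := []OwnerReference{
		{APIVersion: "v1", Kind: "ConfigMap", Name: "unrelated", UID: "111"},
		{APIVersion: "minecraft.example.com/v1alpha1", Kind: "Server",
			Name: "my-server", UID: "222",
			Controller: true, BlockOwnerDeletion: true},
	}
	if owner := controllerOwner(refs); owner != nil {
		fmt.Println(owner.Kind, owner.Name) // the managing Server
	}
}
```

At most one owner reference may have Controller set, which is why a "get the controller of this object" helper can return a single value.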
So what we're essentially now saying, just to sort of break all this down and pop up a couple of levels: what we're actually saying here is that we want to go through and we want to list all of the pods that have an owner reference that points to this particular Server object that we're reconciling. And so, where did we have... so we had vars here.

What if we have no pods? What if we have too many pods for some reason? What happens if somebody manually went through and created a pod assigned to our controller? And then we can do things like: what if the pod doesn't match our spec? Recreate it, something like that. Okay, so those are the types of things that we can go ahead and do here. Let's actually save that one for later, because it's a harder one.

If the count is greater than one, then delete the extras; so we'll just go ahead and delete some extra pods, because that shouldn't be happening. But this is probably a pretty rare case, so we're not going to sweat it too much. And then we can also do: if the length of the child pods is less than...
Okay, so now what we're doing is we're actually getting a list of the pods, and if we don't have any pods listed, that's when we're going to go ahead and create things. So this should eliminate some of the errors that we have going on in our controller. So if we look at our controller now, we have a bunch of stuff happening. So I quit the controller, and then I'm going to actually go through and do kubectl get pods, and that thing's running. I'm going to do kubectl delete pod on mc-my-server, so this will go ahead and delete that pod, and because the controller is not running, it's not going to get recreated.

But now what I can do here is, if I do make run, what should happen is this controller will wake up and it'll say: hey, look, I have a Server... and something went wrong; let's figure out what happened. What it should say is: I have a custom resource, so I should have a pod.
A
Let
me
go
ahead
and
check
my
paw
and
and
if
I
don't
have
a
PI,
let
me
go
ahead
and
create
one.
So
it
looks
like
we
got
a
failure.
It
says
unable
to
list
child
pods
server,
D
fo,
my
server
error
index
with
name
field.
Metadata
controller
does
not
exist.
So
we
something
got
messed
up
here.
What
am
I
actually
screwing
up.
A
Me
a
second
here
so
we're
actually
constructing
the
pod
in
the
meta
we
have
labels,
annotations,
name
namespace,
a
bunch
of
this
stuff.
We
are
going
through
adding
that
doing
that
here
we
have
set
controller
reference,
so
this
is
the
helper
to
be
able
to
do
that.
Let's
actually
go!
Look
at
this
code,
go
to
definition,
yeah
I'm,
surprised.
It
says
metadata
got
controller,
that's
that
confused
me
a
little
bit.
So
this
is
in
controller.
Util
set
controller
reference.
A
Well,
there's
a
bunch
of
fun
utilities
here
that
we'll
be
able
to
use.
So
we
have
things
like
creator,
update,
creates
or
updates
the
given
object
in
the
kubernetes
cluster.
The
object
desired
state
should
be
reconciled
with
the
existing
two
using
the
past
and
reconcile
function.
Okay,
so
we
have
that
set
up
signal
hand.
Okay,
so
we
have
a
few
helpers.
There
set
controller
reference,
sets
the
owner
reference,
blah
blah
blah.
So
that's
all
good.
The owner-key function takes an object and returns a list of strings. Grab the raw thing; so this should be a core Pod, so this is essentially taking the raw object and down-casting it to a corev1.Pod. If it's not, this will actually panic on you. We have this GetControllerOf, with metav1; I'm not sure why we have a squiggle there... oh, it's just a spell-check thing. That's the owner! If the owner is nil, then return; then make sure that it's the right kind.

Okay, so let's go through and actually look at, so: manager, GetFieldIndexer, IndexField; returns a FieldIndexer configured with the client. So I believe what's happening here is this is doing essentially client-side indexing, not server-side indexing. So if we go to definition here, and this is in controller-runtime, which is sort of a helper library on top of client-go, let's see: this is interfaces.go in controller-runtime, package client.

An IndexerFunc knows how to take an object and turn it into a series of non-namespaced keys; namespaced objects are automatically given namespaced and non-namespaced variants, so keys do not need to include the namespace field. And a FieldIndexer knows how to index over a particular field such that it can later be used by a field selector; IndexField adds an index with the given field. Okay, so we're essentially creating that field there as part of this index. I don't quite understand this, I've got to be honest here.
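For reference, the registration that creates that virtual field (the one the earlier error message said did not exist) follows the Kubebuilder tutorial's pattern; roughly, for the controller-runtime 0.2-era API, with names that are my assumptions:

```go
// Register a client-side index over a made-up field name so that
// List(..., client.MatchingFields{podOwnerKey: name}) can filter pods
// by the Server that owns them.
podOwnerKey := ".metadata.controller"
err := mgr.GetFieldIndexer().IndexField(&corev1.Pod{}, podOwnerKey,
	func(rawObj runtime.Object) []string {
		pod := rawObj.(*corev1.Pod) // down-cast; panics if not a Pod
		owner := metav1.GetControllerOf(pod)
		if owner == nil || owner.Kind != "Server" {
			return nil // not managed by one of our Servers
		}
		return []string{owner.Name} // index key: the owning Server's name
	})
```

If this registration never runs (say, it was one of the files lost in the upgrade), listing with that MatchingFields selector fails with exactly the "index ... does not exist" error seen above.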
B
A
A
A
A
A
A
B
A
B
A
A
Alright, okay, so now we've got a lot less noise; we're not seeing any errors showing up. So what we're actually seeing here is that we're saying "creating pod for server run", and this is actually the name of the pod that we're creating, and then there's a lot of other things that are essentially kicking the reconciler. And so what's happening is that every time the pod changes, we're actually going to go through and update — we're going to reconcile again — because we want to do two-way reconciliation.
A
We
want
to
be
able
to
actually
create
modify
the
pod
when
server
changes,
but
we
also
want
to
actually
maybe
update
the
status
of
server
when
the
pod
changes,
and
so
this
thing
actually
will
I
think
kick
things
both
ways.
So
let's
go
ahead
and
actually
see
if
we
can
update
the
status
then
as
part
of
this,
oh,
the
first
thing
we
can
do-
and
this
is
something
that
we
talked
about
last
time-
is:
if
we
look
at
the
controller.
A
One
of
the
things
that
we
have
here
is
that
we
were
playing
around
with
this
idea
of
generated,
name
and
so
generated.
Name
means
like
hey:
let's
not
actually,
sort
of
name
the
thing
let's
go
through
and
actually
let
the
name
be
generated.
Unique
name
be
generated
based
on
a
prefix
and
so
we're
going
to
call.
This
thing
is
going
to
be
MC
name
and
then
this
and
then
we're
going
to
turn
this
into
generated
name-
and
we
couldn't
do
this
before,
because
we
weren't
doing
full
reconciliation.
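GenerateName semantics can be mimicked in a few lines of plain Go: the API server appends a random suffix to the prefix you give it. This sketch uses a five-character suffix and a simple alphabet as assumptions for illustration; the real server picks its own length and character set.

```go
package main

import (
	"fmt"
	"math/rand"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// generateName mimics what the API server does when
// metadata.generateName is set: append a short random suffix to the
// prefix. Suffix length and alphabet here are illustrative guesses.
func generateName(prefix string) string {
	suffix := make([]byte, 5)
	for i := range suffix {
		suffix[i] = alphabet[rand.Intn(len(alphabet))]
	}
	return prefix + string(suffix)
}

func main() {
	// Something like "mc-my-server-x7k2q"; the suffix varies per run.
	fmt.Println(generateName("mc-my-server-"))
}
```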
A
And so now we've successfully reconciled; everything is stable. But now, if I do a kubectl get pods, I can do kubectl delete pod mc-my-server — that'll delete it — and then what should happen here is that this thing actually should go through and re-reconcile and recreate a new pod. This is what should happen. It says pod deleted. Oh, we got an error. We got an error. What happened here?
A
Okay, okay, so here we did "created pod for server run", and one of the things that you can see here is that now we're actually getting a random name. And so this is an example of how, when a ReplicaSet creates pods and it has a random name, this is one way that you can actually create the random name for the set of pods that you're managing. Now, we're only creating one pod here for this, so it's like — it's not like...
A
So what we're actually doing here is — if I look — we're going through and constructing the pod, creating it, and then as part of creating it, I believe the pod gets modified with the result from the server. And then you can actually see now we're taking the pod there, and the name is actually coming out of it as we're dumping that. So: namespace default, pod, blah blah blah. So that's actually really cool that we see that happening.
A
Let's see how we're doing on time. Okay, so let's see, let's do a little bit more work here, because I think this is fun. The next thing I want to do is: let's actually try and write the status back out. Okay, and what we're gonna have here is — okay, so the next thing we're going to do is: let's decide what we want our status to look like. So we're going to go back to our Server type here, and we have spec and status right now.
A
A running boolean. Okay, so we're gonna do a couple of things there: in the Minecraft Server, in the status, we're going to list the pod name that we're using, and then we're gonna have a bool in terms of whether it's running or not. Now, in real life — and I think, you know, it's gonna be fiddly code — we probably want to have more detailed status here, but this is actually a great way for us to go ahead and get started.
A
Put
it
up,
and
so
the
first
thing
we
need
to
do
here
is
figure
out.
A
There's some sort of thing going on where people are responding to stuff that isn't in our stream. Weird. Okay, looks like a bug. Okay, so "cannot use ... as string value" — oh, items dot Name. Okay, there we go. Okay, and now we can actually go through and we can do...
A
B
A
B
A
B
A
B
B
A
A
B
A
B
A
B
B
kubebuilder scaffold, scheme — do we have that? I am confused. I am very confused. Go: schemes, setup log, scheme, runtime, main — old stuff.
A
Okay, so this is "unable to update CronJob status" — so it's something about updating. We've got it: we'll update the status here, and just like before, we're gonna — I mean, specifically update the status subresource. We'll use the Status part of the client, with the Update method. The status subresource ignores changes to spec, so it's less likely to conflict with any other updates.
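The "status subresource ignores changes to spec" behavior can be modeled in plain Go — hypothetical types here, not the real client: a status write copies only the status into the stored object, so a stale spec held by the caller can't clobber concurrent spec edits.

```go
package main

import "fmt"

type Server struct {
	Spec   string
	Status string
}

// updateStatus models a status-subresource write: only the status field
// of the stored object changes; whatever spec the caller happens to
// hold is ignored, which is why such writes conflict less with other
// updates to the main resource.
func updateStatus(stored *Server, submitted Server) {
	stored.Status = submitted.Status
}

func main() {
	stored := &Server{Spec: "v2", Status: "pending"}
	// The caller has a stale spec but a fresh status.
	updateStatus(stored, Server{Spec: "v1-stale", Status: "running"})
	fmt.Println(stored.Spec, stored.Status) // v2 running
}
```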
A
So we have to declare — maybe this is the thing that we missed — we have to declare that we want the status subresource. Maybe we don't even have a status subresource on our type, so that might be it. Does the make actually update the CRD on the API server? Oh, that's probably a good point also, so we probably have to do the make install again. That's a really good point.
A
Okay, so let's do both of these. Let's go through and make sure that we have the status subresource enabled via blah blah blah — "when enabled, updates to the main resource..." — so I don't think we did that. Generating the CRD... so this would be — this goes on, not ServerSpec — ServerStatus, boom. Okay, so we need to actually go through and add that, okay, and then we're gonna do a make install.
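For reference, the subresource is enabled by a kubebuilder marker comment placed on the API type. This is a sketch with stand-in types (minus the usual metav1 embeds and the full marker set a real kubebuilder project generates):

```go
package main

import "fmt"

type ServerSpec struct{ EULA bool }
type ServerStatus struct{ Running bool }

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Server is the Schema for the servers API. The subresource:status
// marker above is what causes the generated CRD to include the status
// subresource, so that Status().Update() calls are accepted by the API
// server. Regenerating and reapplying the CRD (e.g. make install) is
// still required for the change to take effect.
type Server struct {
	Spec   ServerSpec   `json:"spec,omitempty"`
	Status ServerStatus `json:"status,omitempty"`
}

func main() {
	srv := Server{Status: ServerStatus{Running: true}}
	fmt.Println(srv.Status.Running) // true
}
```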
A
Well, I see — okay, so here is validation, subresources, status — so we actually have a subresource specified there. I don't think we had that before, so good catch, Mike. Okay! So now, let's make sure that we actually — let's go back to our controller. Our controller is calling Status.
A
Successfully reconciled — okay, cool. So we probably skipped a step, and this probably was something that is new since the beta 2 stuff, or maybe I missed it the first time around because we weren't actually doing it yet. But now, if I do kubectl get pods, kubectl delete pod mc-my-server blah blah blah — if I delete that... oh, we hit another bug. What did we hit? What did we hit? Okay: creating pod, and then we actually had a panic. The panic was server controller, line 88.
A
Time to enter debug mode? Yeah, I mean, we can start doing Delve and stuff like that — I think we might run out of time. But line 104, okay, so that is also showing up. Okay, so here we're doing pod name, and so something happened. Oh, you know what — this got shadowed. Freaking A! Okay, this is like my least favorite thing about Go right here. Okay, so what happened here is: I declared pod here, and then I declared a new pod with the colon-equals.
A
This actually creates another variable that shadows this variable, and so that means that this thing stays nil as we go through this line. So what I have to actually do is: we have to do a `var err error`. We do that, we do this — now we're actually not creating a shadowed variable, and it'll actually set this particular pod here, which will then be used down here. So this is the problem that we had. So now let's go through — and that is the worst.
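The shadowing bug is easy to reproduce in isolation: `:=` inside an inner block declares a brand-new variable that shadows the outer one, which therefore stays nil; declaring `err` up front and using plain `=` assigns to the outer variable instead. (`newPod` here is a hypothetical stand-in for the create call.)

```go
package main

import "fmt"

// newPod is a stand-in for the call that creates and returns the pod.
func newPod() (*string, error) {
	name := "mc-my-server"
	return &name, nil
}

func main() {
	var pod *string

	{
		// Buggy: := declares a NEW pod (and err) scoped to this block,
		// shadowing the outer pod — so the outer pod stays nil.
		pod, err := newPod()
		_, _ = pod, err
	}
	fmt.Println(pod == nil) // true: the outer pod was never set

	var err error
	{
		// Fixed: with err declared above, plain = assigns to the
		// outer pod and err instead of shadowing them.
		pod, err = newPod()
	}
	fmt.Println(pod == nil, err) // false <nil>
}
```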
A
The whole shadow thing — that should be an error. Like, if you create a new variable in a sub-scope using the colon-equals and it actually shadows the variable above... actually, you know what, the syntax that I think we should have is that you should have the colon next to the thing that you're creating.
A
So it should be like `err:` and then `pod:` — the colon says actually create a new one of each of those things, so you can explicitly say which of the things on the left you think are new variables, not old variables. But, you know, too late now. So what now? Okay, so now we're going to do kubectl delete pod — this thing here.
A
Okay, and so what happened here is — this is fascinating, okay — so: successfully reconciled, and then some sort of concurrency happened here. So it created the pod, and then it's like, oh, I'm unable to update — this operation cannot be fulfilled on servers.minecraft... "my-server": the object has been modified; please apply your changes to the latest version and try again. So something behind the scenes...
A
Something behind the scenes actually updated the thing — the status — so we actually had two status updates that conflicted. But then things ran again, and eventually the whole thing sort of settles out, and I'm not sure exactly why it happened like that. But if we do kubectl get server...
A
At this you can see we do actually have the pod name, and it's running, so we're actually up and running. Oh, you know what happened here — I'll tell you what happened: it's the race conditions. It's beautiful, okay. But this is actually the sort of stable — I mean, understanding what's going on here, and I'll explain it: if you understand how this works, you understand sort of the fundamental stability of Kubernetes, where things actually just sort of work themselves out.
A
I'm gonna ignore it, but the latest one should actually go through and win over time, and so the idea is that you keep retrying until this stuff actually works. Now, I think there may still be some races here, if I'm thinking correctly, because when I actually go through — okay. So what happens is — and this is the way that this higher-level framework works — if you end up returning an error from the Reconcile, then it will actually enqueue it to rerun Reconcile in the future.
A
So, let's — we're gonna go to the — let's see. So Reconcile: if we go through, and let me actually make sure — so this is ServerReconciler, okay; this is actually implementing a — so the manager thing is actually the one that does that, okay. So if we look at manager and we go to the type definition of this thing, what we can see is that here's a bunch of things on the manager — if we look at Reconcile...
A
This controller implements the Kubernetes API. It's a Reconciler — and the Reconciler is the thing that we're going to look at; go to type definition — and this is all part of the controller-runtime helper library. Reconcile: "Reconciler performs a full reconciliation for an object referred to by the Request. The Controller will requeue the Request to be processed again if the error is non-nil or if Result.Requeue is true."
A
It tells this thing to actually retry it, and so that means that as long as we're returning an error, Reconcile will continually be called until things actually settle down. And that ability — it's like optimistic concurrency: we try to do a lot of things at once. If they work out, they work out; if they don't work out, then we essentially re-reconcile, and eventually things settle down.
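Optimistic concurrency itself is simple to model in plain Go: every stored object carries a resourceVersion, a write with a stale version is rejected, and the loser re-reads and retries. (With the real client you'd typically reach for client-go's retry helpers rather than hand-rolling the loop.)

```go
package main

import (
	"errors"
	"fmt"
)

type object struct {
	ResourceVersion int
	Status          string
}

type store struct{ current object }

// update rejects a write carrying a stale resourceVersion — the
// "object has been modified" error we saw from the API server.
func (s *store) update(obj object) error {
	if obj.ResourceVersion != s.current.ResourceVersion {
		return errors.New("conflict: the object has been modified")
	}
	obj.ResourceVersion++
	s.current = obj
	return nil
}

func main() {
	s := &store{current: object{ResourceVersion: 1}}

	stale := s.current // reader A sees version 1
	fresh := s.current // reader B sees version 1
	fresh.Status = "running"
	_ = s.update(fresh) // B wins; stored version is now 2

	stale.Status = "podName set"
	fmt.Println(s.update(stale)) // A loses with a conflict

	// A retries: re-read, re-apply its change, update again.
	retry := s.current
	retry.Status = "podName set"
	fmt.Println(s.update(retry), s.current.Status)
}
```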
A
Okay — from chat: "when I was playing around with a controller one time, default concurrency was set to one; not sure if kubebuilder actually modifies that." That's actually an interesting question, because the only way I could think this would happen is if we actually do have some concurrency happening. It may be, you know — it may be concurrency. Where am I at? Okay, so let's look at main here, because that's probably where we would set it.
A
ctrl.Options: scheme, metrics, leader election, EnableLeaderElection — all that. Do I actually have two of them running at once, or something like that? We have the logger; we have EnableLeaderElection set to false. All right, let's look at the Makefile. When I do a run, what are we actually running with run? It's just plain — we're not setting any options with the thing.
A
Something is updating the status behind the scenes. Okay, so one of the things that we can do here — let's actually go through and — okay, here's how we're gonna debug this. And so, a lot of times you really understand stuff — because you see like, hey, this was probably fine, this will probably work out; I'm pretty sure that, you know, there's no serious bug here. But really root-causing and understanding it — this is how you really test your understanding and know.
A
Okay, so it looks like we got two updates, as far as I can tell. The first update — and unfortunately it doesn't actually put any dividers between stuff — this is the first update here, and you can see we set the pod name in. And yeah, so we got two updates that actually were successful: the first one went through and set the pod name into the status, and then the second one went through and set running equals...
A
...true, with this thing. Now, along the way, was there actually other metadata that got set on this? We have a bunch of stuff that is like the original stuff: we have the creation timestamp, the generation — the generation is two here, because we just updated the status subresource. Okay, we have resourceVersion.
A
That number is global across all of etcd — it's an incrementing counter, but you're not supposed to actually look into it. selfLink, the UID... so I didn't see any other changes out-of-band. Let's see — oh, and this time we didn't get any error, so we actually didn't hit the race condition that time.
A
From chat: when you send the update, the API server is setting some metadata. So yeah, that may be it. Demetria: "I wonder if cached in-memory sources aren't being updated for some reason — but you are doing a Get at the beginning of the Reconcile." Yeah, so we're not actually pulling anything out of an informer cache, I don't think. "Isn't it the case that you are not overlaying the pod from the read — i.e. the object is going to be different and hence causing the update to fail?"
A
Sorry, I'm doing stuff here — maybe that works, maybe it doesn't. Okay, so we're doing — so pod is either going to be, as we go through here — pod is either going to be the one that we create, in which case I believe Create actually updates pod based on the result from the server. And so if I go to the definition of this Create: "Create saves the object in the Kubernetes cluster." Okay, that's not useful, but I believe it actually gives you an updated version of the object. I believe that's what happens there.
A
I could be wrong, but we do know that after we ran this, the log actually had the updated name, so we at least got some information back — and so that was enough of a name to be able to put in here, and then we actually can see if it's running. And so it worked last time, where we didn't get the race; we didn't get any errors that time. Okay, so let's actually run this again, one more time.
A
We got one update as the pod that we were deleting went from running to not running — we went through and removed the running boolean from this, right? And so this is as we were deleting, and we did a status update for that. And then we ran another reconcile so that, when we started the new pod, we could actually set the status for the new pod. So we actually removed the old pod...
A
...and, you know, changed the status, and then we actually saw that thing go to running. So we actually saw a sensible set of updates come through, but we got some errors here, in terms of, again, "operation cannot be fulfilled on servers: the object has been modified." So I have to think that we have some sort of concurrency going on here.
A
Okay, so I'm running out of time here, because I've got a meeting at 3:00. I've got to think — so, like, either we have stale information in the queue, or we're actually getting some concurrency even though we don't think we're getting concurrency; one of those is happening here. I would love to actually dig into the manager here — let's just read some code. I'm going to go to the definition of this thing here: a Manager. Go to definition.
A
SetFields, GetConfig, GetScheme, GetClient — there's a lot going on here. I see we had a sleep... exactly. Well, I don't think it's an issue, because I think it settles down. So, you know, I'm not worried about us actually having real problems here. I think the thing is sound; I just don't feel like I...
A
Right, well, I am out of time now. Unfortunately, you know, this is not a satisfying place to stop for any of us. What I'm gonna do is check this stuff in, so it's gonna be an up-to-date thing. I think we made some real progress, because we have a real reconcile loop now. I think, for me, the next step is: let's actually go through and update our README so we keep track of stuff. Populate status — okay, we're doing that, boom. We'll go through and — can we do like a squiggle?
A
Does this work? I don't — does Markdown support that? Recreate pod — well, we haven't done that. We haven't done that, okay. So, you know, we haven't done a lot, but we're populating the status and we're doing true reconciliation now. I think, you know, next time, if I were gonna do this, I want to do something a little bit more fun.
A
I want to get to the point where I can actually go through and create and manage the Service for this, so that I can actually get into that. And then I think part of the status that we'll want — this thing creates both the Service and the pod, so let's actually reflect both of those into the status, and maybe populate that with the IP address — an address that we can connect to Minecraft with, to actually be able to do that.
A
Fernan says: could you try another read with a reflect.DeepEqual and see what changed — probably updating the status? Yeah, that might be worthwhile, to actually go through and do that to try and debug it, because clearly it thinks there's a diff there. I think we're out of time. Yeah, I think next time we'll either pick up on debugging this, or maybe we can go through and try to resolve it otherwise and sort of give you all the answer.
A
But if folks do have an answer, feel free to, you know, send some PRs to the GitHub repo and we can actually start from there. I'll try and keep an eye on that, and hopefully we can figure out this mystery. And next time we'll pick up and we'll do the Service stuff so that we can actually connect to the thing — so that'll be a lot of fun.
A
So, all right — thank you, everybody, for joining in, and hopefully we all learned something. It's a different flavor of TGIK when I'm doing the coding here, so hopefully you enjoyed that. Let me know what you think, and I will see you — I'm gonna be at a conference next week, so I'll see you in a couple of weeks, probably. All right, talk to you all later; thanks for joining in.