Description
Join our #TGIK8s livestream this Friday! Naadir Jeewa & Anusha Hegde will walk us through #Kubernetes Cluster API Provider BYOH (Bring Your Own Host): an open-source infrastructure provider for already existing edge, bare-metal, or VM-based Linux hosts.
Notes: tgik.io/notes-176
A: Hello, welcome to TGIK — but at an all-new time: it's 9:00 AM GMT. This is TGIK Europe. Joining me today is Anusha, and today we're going to be talking about Cluster API. Once again, you know what it is — whenever I appear, I'll be talking Cluster API. So Anusha, do you want to introduce yourself?
A: So first things first, let's get the notes up. There we go — that should work, so there's the link to the notes; do follow along. And if anyone wants to take notes as well, that'd be great. So let's go through our usual stuff.
A: Have a look at the news. First, happy Thanksgiving to everyone in the US — so it's relatively quiet. What has been happening is that Kubernetes 1.23 continues to be worked on, and the beta is out. I just picked out some of the things that interested me; let's start with the easy one.
A: As everyone's slowly getting tired of knowing, I work on AWS and Cluster API Provider AWS, so I was intrigued to find out that CSI is finally on by default for AWS and GCE. From what I read, the long-awaited migration of cloud providers out of the kubernetes tree is coming up.
A
So
that's
if
you
as
you,
the
history
behind
that
is,
there's
a
lot
of
cloud
provider
code
in
the
kubernetes
repository
and
basically
want
to
pull
that
out,
reduce
the
millions
of
lines
of
dependencies
that
pulled
into
kubernetes
by
moving
the
cloud
cover
integrations
externally.
So
that's
finally
happening.
It's
been
long
delayed,
so
that's
one
thing:
that's
happened.
A: Interestingly, CRI v1 is now the default. This one kind of shocked me: last year when I was looking at it, the Container Runtime Interface was still on v1alpha2, and given that kubernetes has been out for ages, we're only just moving to v1 for CRI — which is cool. I'm just going to shut down Slack, just give me a second... there we go, don't want any notifications.
A: While we're on this, I don't know if anyone's familiar with what the differences are — does anyone know? But I think it's good: less reliance on alpha and beta stuff in the kubernetes ecosystem is great. For those of you using kubectl for automation — if you're just doing automation through shell scripts and things — there's a feature been added so that you can use kubectl wait on JSONPath expressions. So if you don't want to write your own kubernetes controller, if you don't particularly want to go write Go, then shelling out to kubectl and waiting on a JSONPath might be something you want to do: you can wait on a particular condition to happen and then move on.
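A sketch of what that looks like (the JSONPath form of `kubectl wait` from the 1.23 cycle; the resource names here are made up):

```shell
# Wait up to two minutes for a Deployment to report at least one available
# replica, using a JSONPath expression instead of a named condition.
kubectl wait deployment/my-app \
  --for=jsonpath='{.status.availableReplicas}'=1 \
  --timeout=2m

# The older condition-based form still works alongside it:
kubectl wait pod/my-pod --for=condition=Ready --timeout=60s
```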
A: And finally, this one's pretty interesting — let me share... I've not been sharing my screen, have I? There we go. Validation rules for custom resource definitions, using the CEL expression language. Those of you who are writing kubernetes controllers will probably find this relevant to your validation webhooks: you can do more complex checks inside the kubernetes API itself, which means you don't have to write your own custom webhook. This is very alpha at the moment.
A: Scott goes: "yeah, yeah, another validation language." I mean, we love adding them — when is it too many? You know, it doesn't matter: more, more, more programming inside YAML, I love that as well. That's also good to see... why do I joke. Yeah — this is stuff you really could not do just with OpenAPI validation alone, and it's very early at the moment.
A: This is the sort of follow-on to a proposal from a couple of months ago around replacing the entire way we write CRDs with something CEL-based. That didn't make its way through, but small incremental changes like this have, so we'll see where this goes. There are some caveats around using it — if you're interested, have a look at the PR.
A
Those
are
the
things
I
picked
out
for
this
release
and
I
should
click
on
the
white
screen
that
would
help,
and
I,
as
if
I
was
just
going
through,
what's
happening
in
ecosystem,
we
have
a
couple
of
projects
that
been
accepted
for
the
sandbox
in
the
cncf.
A
First
of
all,
is
this
kubernetes
openshift
backup
operator?
Not
really?
I
have
seen
this
floating
around
before,
but
you
say
it's
another.
It's
a
sort
of
backup
system
for
kubernetes
based
on
rustic,
so
it's
similar,
I
guess
to
valero.
If
those
have
used
it.
So,
yes,
I
think
it's
yeah
and
also
it's
doing
some
of
the
same
things
as
valero
does
and
yeah
bait
uses
rustic,
which
are
sv
compound,
so
you
can
backup
them
volumes
to
like
s3
compatible
storage.
A: So that might be something we might want to try out at some point on a future episode — we'll see. And also kube-rs, which is exactly what you think it might be: a Rust kubernetes client and controller runtime. Right — question from Saman: "does this validation override existing OPA policies in the cluster?" Oh right, this is the CEL one.
A: No, it does not. The CEL validation rules are for CRD authors. Say you're writing a kubernetes-based application and you want to validate the input coming in from the user — to take the Cluster API example, someone tries to create a VM with negative 50 gigabytes of RAM. Technically, OpenAPI validation might let that through, and it's obviously incorrect — so say you didn't have that check available, then you could use the CEL language to do it.
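A minimal sketch of what that looks like in a CRD schema, using the `x-kubernetes-validations` extension from the 1.23 alpha (the `memoryGiB` and replica fields here are hypothetical, for illustration):

```yaml
# Fragment of a CRD's openAPIV3Schema using CEL validation rules.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.memoryGiB > 0"
          message: "memoryGiB must be positive"
        # Cross-field checks are where CEL goes beyond OpenAPI alone:
        - rule: "self.maxReplicas >= self.minReplicas"
          message: "maxReplicas must be >= minReplicas"
      properties:
        memoryGiB:
          type: integer
        minReplicas:
          type: integer
        maxReplicas:
          type: integer
```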
A: The one exception is: say you're a platform engineering team writing your own controllers — you might have your "acme-core-deployment" CRD or something — and you might be using OPA today to do that validation. Then yes, this could be a replacement. Cool, thanks. David says: "thanks YouTube for pointing me here — hey, the Bring Your Own Host provider, it's very interesting; you're streaming at an EU-friendly time." Yes, it's good to be awake for once. All right — so that's the news.
A: If anyone's got anything else they think has been interesting this week, post the links in the chat. Otherwise, let's get going with the show. All right: Bring Your Own Host. I'm not sure everyone knows what that means as a Cluster API phrase. We're not going to talk through the basics of Cluster API — we've done that plenty of times on TGIK, and I will post links to all the "previously on TGIK" episodes later.
B: Yep — so basically, why Bring Your Own Host, right? We describe the Bring Your Own Host provider as something for your already-existing hosts. So what are these already-existing hosts? Say you're on an infrastructure stack of AWS, Azure, GCP, anything — or you could be using ESXi for your virtualization layer, or it could be Hyper-V. You're in a specific infrastructure setting and you want to deploy kubernetes clusters. Of course, you could do it on your own, but then the lifecycle-management headache would be on you.
B
That's
where
all
the
awesome
project
like
clustrapia,
comes
into
picture,
so
it
has
simplified
to
a
great
extent
of
how
you
can
manage
the
life
cycle
of
all
these
kubernetes
clusters
and
for
your
different
infrastructure
needs
like.
If
you
have
aws,
we
have
a
cluster
api
provider
for
aws
for
vsphere.
We
have
capi
for
vsphere.
B
Similarly,
for
a
lot
other
infrastructure
providers,
the
list
is
long,
but
it's
also
not
exhaustive.
Right.
Like
one
example,
I
can
think
of
is
hyper-v,
so
there's
no
plus
three
pair
provider
for
hyper-v.
So
now
what
do
you
do
like?
You
have
hyper-v,
but
you
want
to
deploy
kubernetes
clusters
and
you
also
want
to
leverage
all
the
goodness
of
cluster
api.
So
how
do
you
do
it?
So
this
is
where
the
bring
your
own
host
comes
into
picture,
so
using
the
bring
your
own
host
provider.
B
If
you
have
hosts
that
are
running
on
hyper-v
it,
you
could
have
hyper
vpns.
You
can
have
kubernetes
node
bootstrap
out
of
these
vms
and
then
you
can
form
a
kubernetes
cluster.
I
don't
want
to
wave
my
hands
too
much,
so
I
have
one
diagram.
Probably
I
can
show
you
all.
I'm
gonna
share
my
screen.
B: For those of you aware of what Cluster API is and what the different providers do — and you all can probably keep me honest here — whenever we want to create a cluster on AWS, vSphere, Azure, anything, Cluster API has these concepts of MachineDeployments and MachineSets, and different controllers that continuously reconcile the current state toward the desired state. Now suppose you want to create an AWS cluster — a simple AWS cluster with one control-plane node and one worker node.
B
So
essentially
these
are
two
machines,
so
you'd
need
two
vms
or
two
ec2
machines
for
this.
So
since
we
are
taking
the
example
of
aws
cluster,
let's
take
aws
plus
repair
provider.
So
whenever
there's
a
request
for
a
control,
plane,
machine
or
a
worker
node
machine,
the
first
thing
the
controller
does-
is
look
at
how
many
machines
are
requested
now
for
control
plane,
it
has
requested
for
one
machine.
So
that's
the
desired
state
and
the
current
state
is
zero.
B
So
it
goes
and
provisions
the
hardware
for
it
that
is
it
spins
up,
ec2
machines
or
in
case
of
vsphere
it
spins
up
vsphere
vms,
and
then
we
have
these
ovas
or
amis
that
are
published,
that
for
a
given
kubernetes
version
and
for
a
given
os,
we
use
the
image
builder
project
to
bake
all
these
kubernetes
binaries
into
the
ova
or
ami
itself.
Right
so
vms
are
deployed
out
of
these
templates
and
then
finally,
the
kubernetes
node
part,
wherein
I
think
kuberium
is
kind
of
widely
used,
bootstrap
provider.
B
So
based
on
the
bootstrap
provider,
you
either
execute
qbm
commands
or
whatever
bootstrap
provider
you
are
using,
and
then
you
get
kubernetes
node
out
of
it.
This
node
now
can
join
the
cluster,
so
this
is
what
typically,
a
cluster
api
infrastructure
provider
does
right
from
provisioning,
your
hardware
to
os
to
having
kubernetes
binaries
and
then
finally
bootstrapping
the
node.
B: Whatever OS upgrades there are, the user has the freedom to do them as they choose. What the Bring Your Own Host provider does is: given a kubernetes version, we install all those binaries, and then we also use the kubeadm bootstrap provider to bootstrap the host into a kubernetes node. Essentially, if you look at it, this is a two-step process: there's a host-provisioning step, which is the hardware plus OS, and then there's a node-provisioning step, which is the kubernetes packages and the node itself.
B
So
with
bring
your
own
host
provider,
we
have
decoupled
this
two
steps.
That
is,
you
do
the
host
provisioning
and
we
will
do
the
node
provisioning
for
you
and
another.
I
think
interesting
tidbit
is
that
cluster
api
has
the
machine
immutability
concept
wherein
which
means
like,
if
you
want
new
machines,
if
you
want
to
create
new
machines
or
if
you
want
to
scale
your
cluster,
all
this
hardware
is
created
on
the
fly
that
hardware
as
in
it
could
be
vms
or
ec2
machines.
These
are
spun
up
on
as
requested.
B
Similarly,
when
you
want
to
scale
down
your
cluster
or
you
want
to
delete
your
cluster
altogether,
these
vms
are
simply
deleted.
They
are
thrown
away,
but
that
is
not
the
case
in
bring
your
own
host.
So
we
cannot
say
that.
Okay,
I
need
to
scale
down
this
host.
It's
discarded,
it's
not
discarded.
Instead,
whatever
kubernetes
binaries,
that
we
are
installing,
whatever
modifications
we
are
making
to
the
host,
we
do
a.
B
I
want
to
air
quote
it
best
effort
clean
up
of
your
host
and
you
know,
try
to
return
back
to
whatever
state
it
was
initially
so
that
you
can
reuse
this
host,
maybe
as
part
of
another
cluster
where
it
could
be
taking
on
a
different
kubernetes
version,
or
you
could
completely
rip
it
down
and
install
new
os
head.
Basically,
you
can
reuse
your
hardware,
so
this
is
where
we
are
slightly
deviating
from
the
machine
immutability
concept.
B
So
this
is
like
the
basic
difference
between
python
the
rest
of
the
providers.
Whether
do
you
want
to
like
add
something
else
or.
A: No, that's great, that's perfect. So we've got some questions in chat. The first one was vSphere-related — a question around vSphere needing DRS — and the answer is yes, it does; we're not recreating DRS in the provider, that would be far too complicated.
A
And
yes,
many
hundreds
of
years
of
engineering
question
for
sunan
that
does
bring
your
own
host
extend
for
edge
or
iot
use
cases?
Well,
since
the
os
footprint
required
is
minimal.
B
Yep
a
good
question:
I
will
get
to
it
so
yeah
again,
I
have
like
another
beautiful,
simple
diagram,
I'm
just
going
to
throw
it
in
here.
So
this
is
what
bytch
is
as
of
today.
B: We'll see all of this in detail, probably when we do a hands-on with the BYOH provider, but I just want to give an understanding of where each component lies when it comes to BYOH.
B
So,
yes,
we
are
a
cluster
api
provider,
so
we
have
something
called
a
management
cluster
and
a
workload
cluster.
So
the
line
that
I've
drawn
here,
the
white
line
that
you
see
horizontal
divides
this
into
two
spaces.
One
is
the
management
cluster.
Now
this
management
cluster
has
to
reside
in
a
data
center
or
it
has
to
be,
and
it
has
to
be
powered
by
one
of
the
existing
infrastructure
providers,
that
is
to
say,
your
management
cluster
should
be
on
either
vsphere
aws
azure.
B
So
that
is
like
one
constraint
we
have
as
of
today
so-
and
this
is
in
a
data
center
and
now
comes
the
interesting
part
where
the
workload
clusters
are
present.
Now
these
workload
clusters
are
b
h,
clusters,
that
is
all
the
nodes,
the
control
plane
nodes
and
the
worker
nodes
are
bootstrapped
from
biohosts,
and
now
this
could
be.
B
I
want
to
say
it
as
a
buyer
site,
because
this
could
be
again
a
data
center,
wherein
you
have
some
spare
capacity.
You
have
some
just
servers
lying
around
and
you
want
to
deploy
kubernetes
on
it
or
it
could
be
a
retail
store
outlet
where
you
do
have
some
amount
of
beefy
servers,
but
within
a
limited
amount
it
could
be
edge.
B
Use
case
like
wherein
not,
I
would
not
say
iot,
because
those
are
like
very
limited
hardware,
since
we
are
still
using
cube
adm
for
bootstrapping
the
node,
so
whatever
prerequisites
is
mandated
by
a
kubernetes
bootstrapping
process,
I'm
not
very
sure
of
the
number
something
like
I
think,
two
cpu
or
four
vcpu,
whatever
qb
mandates
that
is
required.
B
So
if,
if
we
consider
edge
in
terms
of
thick
medium
and
thin
edge-
so
probably
I
can
say
this
suits
well
for
thick
and
medium
edge
as
of
today,
just
because
of
the
amount
of
space
that
is
needed
to
bring
these
clusters
up.
Otherwise,
even
though,
if
you're
successful
in
bringing
this
up
in
an
iot
device,
you
wouldn't
have
you
know
much
room
left
for
your
actual
applications
to
run.
B: One interesting takeaway from this diagram I want to call out is the host agent. The host agent is the component that does quite a bit of magic on the host side. It's a binary that you install on your host, and you give it some credentials — that is, how to reach the management cluster. It registers itself as a BYO host, saying: "hey, I am available, I am up and running, I'm available to be bootstrapped as a kubernetes node."
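As a sketch: once an agent has registered, the management cluster ends up with `ByoHost` resources you can list. The binary name and flag below follow the BYOH project's getting-started guide at the time; the paths are examples, so check the repo for the current invocation.

```shell
# On each host: run the agent with credentials for the management cluster.
./byoh-hostagent-linux-amd64 --kubeconfig /etc/byoh/management-cluster.conf

# On the management cluster: the registered capacity pool shows up
# as ByoHost objects.
kubectl get byohosts
```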
B
So
if
we
consider
something
as
a
by
host
capacity
pool,
we
can
say
a
host
has
registered
itself
into
the
capacity
pool.
So
similarly
say
if
you
have
10
hosts,
you
run
host
agent
on
all
of
these
and
they
will
register
themselves
as
available
capacity.
So
now
they're
just
lying
around
there.
It's
there's
no
action
going
on,
but
once
you
do,
you
apply
a
cluster
template
like
you
want
to
create
a
workload
cluster
with
one
control,
plane,
node
and
three
worker
nodes.
B
That's
where
the
bih
infra,
that
is,
in
the
management
cluster,
the
green
box
that
you
see.
So
this
is
our
device
infrastructure
provider
that
is
installed
in
the
management
cluster.
So
this
starts
the
reconciliation
process.
First,
the
control
plane
node.
Obviously
so
it
looks
for
okay,
I
need
one
control
plane.
It
means
I
need
to
claim
one
host,
so
it
looks
at
the
available
host
capacity
pool.
B
Are
there
any
hosts
available?
Yes,
if
there
is,
it
then
starts
the
bootstrap
process.
Now
we'll
see
more
how
the
host
agent
installs
kubernetes,
binaries
and
then
bootstraps
into
kubernetes
node,
maybe
for
now
we
can
say
that.
Okay,
now
the
host
agent,
who
strapped
it
into
a
node
next,
we
have
the
control
worker
node
request.
Since
I
had
requested
for
three
worker
nodes,
it
says:
okay,
are
there
three
available
capacity
in
my
capacity
pool?
B
Then
it
means
that
your
cluster
will
kind
of
get
stuck
in
a
provisioning
status
where
it
is
just
looking
for
available
hosts
in
the
capacity
so
yeah.
So
you
can
create
as
many
workload
clusters
in
your
remote
location
as
possible.
It,
it
all
depends
on
how
many
hosts
you
can.
You
know
spare
for
those
workers.
A: Yeah — so I think we can probably go to my attempt at trying to get this working.
B: Yeah — at the moment, even Cluster API itself, I assume, does not support in-place upgrades, right? Similarly, for now we don't support that yet. But it could become possible, because the very essence of the BYOH provider is that we are decoupling host provisioning from node provisioning — so yes, we are already deviating from the immutable-machine concept.
B
So,
yes,
that
could
be
possible,
but
we
are
not
supporting
it
as
of
now
so
right
now,
an
upgrade
would
mean
you
scale
down
a
cluster,
which
means
we
uninstall
all
the
kubernetes
things
that
we
had
installed
it.
The
host
goes
back
into
the
capacity
pool
and
the
next
time
you
have
to
pick
up
another
host
and
install
new
kubernetes
versions
on
it.
So
it
could
be
any
different
host
that
you're
picking
up-
and
you
cannot
say,
bring
me
back
the
same
post
that
I
had
so
basically,
there's
no
in-place
upgrade.
A
Okay,
it's
cool
and,
I
think
there's
there
are
two
more
questions,
but
I
think
it
might
make
more
sense
to
actually
just
do
it
and
I
think
it
should
answer
the
questions
as
go
along.
So
let's
have
a
look
at
what
I
my
crazy
attempts.
So
people
might
know
from
previous
episodes.
I
do
have
a
vsphere
lab
at
home
because
I'm
very
silly
so
what
we've
got
today
so
it's
simpler
than
last
time
than
when
we
tried
to
do
when
we
were
doing
cube,
rip
and
tinkerbell.
A
If
people
were
familiar
with
what
happened.
Last
time
I
was
thinking
wit.
I
struggled
to
get
lots
of
things
working.
It
turned
out.
I
had
mislabeled
monster
freelance
on
my
network,
so
that's
all
gone,
we're
all
using
my
just
my
single
server
subnet.
That's
we
don't
need
anything
else
for
this
demo,
we're
not
doing
anything
with
bgp,
for
instance,
this
time
round.
A: So what have we got today? We've got a machine which I've called... I did create these folders — always good — right, tgik, there we go. You can ignore that Fedora one for the moment; we might play with that later and see what happens. I've got a machine that's off, called "ubuntu-2004-template". I'm actually just going to turn it on for a second — oh, I need my keys, hold on, give me a second, because I need my YubiKey.
B
So
what
you're
asking
that
is
there
a
way
to
select
which
nodes
will
be
no
at
the
moment
you
cannot
select
which
nodes,
so
all
the
hosts
that
you
register
are
going
to
be
treated
equally.
But
yes,
that
is
a
very
good.
You
know
idea
as
in
if
you
want
to
assign
beefier
hosts
to
control,
plane
notes.
That
makes
like
perfect
sense,
but
at
the
moment
all
the
hosts
are
treated
equally.
A
Thanks
and
as
keith
says,
it's
not
dns,
it's
really.
It's
either
that
or
mte
right.
So
those
are
the
three
million
one
of
the
three
things
we
ever
see.
So
I
believe
this
machine.
I
think,
if
I
checked
that
so
I
had
to
do.
A: It's super hacky. I specifically did not use Cluster API to start off with any of this — all this stuff started life as just an ISO that I installed, so no cheating in that regard. I have a VM where I basically installed Ubuntu from an ISO yesterday. Oh, and Keith keeps saying the big problems are things like NTP — yes, NTP, keep your time in sync, because it's very important in kubernetes, please. Right, so on this machine, what have I done? I did an apt-get upgrade, obviously, and installed—
A: —I'd missed some updates yesterday, so eight were missing. If we look at this machine: we do not have a kubelet, we don't have kubectl, we do not have kubeadm — none of those things. Do we have Docker? No. Do we have—
A: —we do have crictl. Have a look at the list of services: it's a very plain install, apart from a few things I added to make it work later. It's a very default install, basically just running OpenSSH. What I have done is turn swap off beforehand, and there's a couple of... I can't remember now.
A: Did I do any of the sysctl bits? No — there's stuff you would normally do for kubeadm; I think we'll see that on the other machine that I've got. So there's nothing special around here.
A: Yeah, there's no firewall on any of this — good shout. I don't have ufw installed; it's just the minimal Ubuntu Server install, and I think by default you don't get the firewall — it's only with the workstation edition.
A: ...not too much — I'm a Fedora guy; not in the literal fedora-wearing-weirdo sense, just in terms of Fedora Linux, let's be clear. So what have we done on this machine? I have previously downloaded the Bring Your Own Host agent. If we go to the getting-started guide, that's basically the requirement: download the agent onto the host and run it. With these instructions you can do this yourself; you don't need to—
A: —you could literally just run it on the machine. If I did that now, it's just going to error, right, because I haven't set it up: you need to give it a kubeconfig. That might explain some of the communication that happens — well, at least it will in a minute. And there is one other thing I've dropped on this... did I drop it on here?
A: This VM here, "management-tgik". This is a single-node kubernetes cluster — I didn't use Cluster API for this, this is just kubeadm.
B: I think there's a question by Vishal, right: "the agent would need access to the management cluster via kubeconfig?" Yes — you have to provide it, and the management cluster's IP has to be reachable so the agent can talk to the control plane. How does it work? The network could be private or public, it doesn't matter, but the IP has to be reachable from this VM.
A: Unless you put your laptop on the internet or something — which I guess you could do using one of those reverse-proxy tunnels, though then you'd have to hack about with TLS certificates, and that gets pretty difficult pretty quickly. That's why, for this environment, I provisioned a one-node management cluster somewhere else. So this machine does have things installed — it's a normal kubeadm build. What do we have on it? We've got—
A: —barely anything installed; it's barely just running. A standard kubeadm deploy, and then Antrea as the CNI. That's what we've got here.
A: This one I did initially clone from the other template, and then I ran into some issues. If you didn't know — if you're not using Cluster API to provision VMs and you're not using cloud-init or such — I ran into a problem where all my machines ended up with the same IP address, despite the MAC addresses being different. It turns out, at least on Ubuntu—
A: —the default netplan config actually uses the systemd machine-id as the unique identifier sent to the DHCP server. Because I didn't use cloud-init, Ubuntu keeps using that machine-id — so if I clone an Ubuntu VM, even if I change the MAC address, it keeps sending the same ID to the DHCP server, and it ends up with the same address. So that's one thing. The other thing I have changed in that template:
A: The way around it: you can reset the machine-id — it's hostnamectl or systemctl, one of those — but the other solution is that if you just add `dhcp-identifier: mac` to the netplan config, it forces it to use the MAC address, and then it doesn't matter that the machine-ids are the same. In a production environment you should make them different, but this does what we want, right? So that's all we've got — we don't have any other—
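The netplan tweak being described looks roughly like this (the interface name `ens192` is an example; `dhcp-identifier: mac` is the actual netplan option):

```yaml
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true
      # Send the MAC address as the DHCP client identifier instead of the
      # (possibly cloned) systemd machine-id, so each clone gets its own lease.
      dhcp-identifier: mac
```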
A: Yeah — Scott says you should always delete the machine-id before cloning a template. That is true, but I was doing this last night... You should use Packer or something like it and have some cleanup scripts — I think that's the real answer to any of this. Don't just copy all your logs around, even to the extent of keeping the same SSH host key, when you move the host. So this is all bad.
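For reference, the usual recipe for clearing the machine-id before templating is roughly this (a sketch; run on the template VM before shutting it down):

```shell
# Empty /etc/machine-id so each clone generates a fresh one on first boot,
# and keep the D-Bus copy pointing at it.
sudo truncate -s 0 /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
```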
A: Do not do this in production, please — but for the purposes of us hacking around today, this is fine. So what else do we have? We need some VMs that we're going to use for Bring Your Own Host. We absolutely do not want to register the Ubuntu template itself, so we're going to clone it. I haven't used this before, so this is a new thing.
A: I have a snapshot, so we're going to use linked clones because they're faster, and we're going to power on some machines and name them accordingly. Let's hope this works — I almost certainly don't have the tooling on this machine, so let's do it from my laptop.
A: Can anyone remember what the Python module is? Cool — okay, fine, I think that's it; we'll find out.
A: So what else have we got? I've got my variables — I do have my vSphere credentials in there. Do not do this in production, that's bad. We're going to use the VMware VM inventory in a bit — we don't actually have the VMs yet — so first we're going to clone them.
A: Okay, so we are cloning some VMs. As they're linked clones, they're pretty quick — so now we're going to have four VMs. In terms of the other bits: they each have two CPUs, because kubeadm has a hard requirement on having two vCPUs. I'm not sure what the history behind that is — you can force it to skip the check — but as of today there's a hard requirement around having two CPUs.
A: They just said you have to have two vCPUs, so you don't see the problem — that's what it sounds like. So maybe we should get rid of it at some point, or revisit it, but you can turn it off; you can turn off the pre-flight check.
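If you do want to bypass that check on a single-CPU box (not recommended for real clusters), kubeadm's pre-flight errors can be individually ignored:

```shell
# NumCPU is the name of the pre-flight check that enforces >= 2 CPUs.
sudo kubeadm init --ignore-preflight-errors=NumCPU
```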
A: So these all come up, and because we have that hack in the netplan, they're actually on different IPs, which is good. So, what we want... these are IPv6 addresses.
A: We are not using IPv6 here, but it would work — you can attempt to connect to these if you want to own me, but they're fine, I hope; I'm pretty certain they're fine, so you won't be able to access them directly. I also have not changed my password since the last time we did this and I showed it on screen. If you get in here, good luck — I don't have anything important in here; this is purely a demo environment.
A: I guess you'd pop onto my home network, which would be bad, I suppose. Anyway — one for the Cluster API security audit; someone can have a go at me about that later. Right.
A: Ignore all the man-in-the-middle errors — that's all fine, don't care. They all have the same SSH host key as well, because they're all clones of the template; we need to fix that. I could change things manually, right — I could do `hostnamectl set-hostname` blah blah blah — and then I think the other requirement is the hosts file, because bits of kubernetes use it. Scott says he's going to let Pushkar know, but I'm not changing my password — thanks.
A: For those who don't know, Pushkar sits on SIG Security, and he's helping us out with our security audit for Cluster API. Okay, so we need to change this — this is another thing from the quick start. One thing I did — I don't know if it was important — I kept the 127.0.1.1 entry and changed only that one; I didn't add an entry for 127.0.0.1. Does that make a difference? It seemed to work for me.
B: Yeah, it works.
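The manual fix being described is roughly this (the hostname `byoh-host-1` is an example):

```shell
# Give each clone a unique hostname...
sudo hostnamectl set-hostname byoh-host-1

# ...and point the Ubuntu-style 127.0.1.1 entry at it in /etc/hosts,
# since parts of kubernetes resolve the local hostname.
echo "127.0.1.1 byoh-host-1" | sudo tee -a /etc/hosts
```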
A: So if we go back to VS Code — there's another thing that happens. Oh yes: the next thing we do is hack the hosts file, set the hostname, and start this Bring Your Own Host systemd unit. And I noticed one thing about that unit:
A: I've got this config, and I had to put `Restart=always` on it, because of one thing we'll see in a minute — there's probably a little minor bug in this. The agent doesn't error when it can't connect to the management cluster; it just exits, and if it exits without an error code, systemd won't restart it on its own. So you have to put `Restart=always` to force it at the moment. I think it should probably return an error code if it's exiting because it can't connect, or on some other error — that was a minor thing.
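A sketch of such a unit (the binary path and kubeconfig location are examples from this demo setup, not fixed by the project):

```ini
# /etc/systemd/system/byoh-agent.service
[Unit]
Description=BYOH host agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/byoh-hostagent-linux-amd64 --kubeconfig /etc/byoh/management-cluster.conf
# Work around the agent exiting cleanly when the management cluster
# is unreachable: restart regardless of exit status.
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```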
A: So we're going to start that. Let's take a look again — so this is just constantly restarting. The Bring Your Own Host agent is connecting to that management cluster, and it looks like it's trying to create a resource called a ByoHost. Is that right?
B: Yep — it registers itself via the ByoHost CRD. We have defined a bunch of CRDs; to draw a parallel with vSphere: for every vSphere cluster you have a ByoCluster, for a vSphere machine you have a ByoMachine, and for a vSphere VM you have the ByoHost.
A: Yeah — eventually this will time out and systemd will stop restarting it. There's another change I could make to the systemd unit to make it retry forever, but let's make this slightly healthier, I suppose.
A: Give us a new tab — let's do that in a new tab, I think. Okay, right: we need to install Cluster API on this machine — sorry, my computer crashed for a second. In my little demo environment for today I have a symlink, just to make life a bit easier for me: I've symlinked the .cluster-api directory to this cluster-api config directory here, because we need to use clusterctl.
A: 1.0.3... we should be fine. We need to create a clusterctl config. So what do we have? We have `clusterctl config repositories` — there you go. Hard-coded into clusterctl are links to various repositories that include the different providers. Bring Your Own Host is new — it's in alpha — so we haven't yet added it to clusterctl. That's right, isn't it? Is there a PR?
A: Yeah — so it'll land in 1.1 or a later release, depending on which one it hits. So we need to hack it into clusterctl right now. Fortunately, clusterctl lets you override with your own configurations — we do all of this in the end-to-end testing anyway, and you can do it in your local environment. It's also what we're doing inside downstream products, such as Tanzu Community Edition.
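The override being described is a `clusterctl` config entry along these lines (the URL is an example pointing at the BYOH project's release artifacts; check the repo for the current one):

```yaml
# ~/.cluster-api/clusterctl.yaml
providers:
  - name: byoh
    type: InfrastructureProvider
    url: https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/latest/infrastructure-components.yaml
```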
A: You can see, in the open-source repos, our own versions of this configuration, which then point at our downstream builds — the same as upstream, the only difference being that they're signed by VMware, but otherwise identical. So we're going to do that. And I've just got an invite for a meeting in 10 minutes — that is not going to happen.
A: Yeah, exactly — we have no-meeting Fridays in VMware, or at least our bit of VMware does, so yeah, it's good. And we are hiring.
A
There is that going on. Specifically, we are hiring someone in Europe — anyone in Europe — to work on Cluster API, so do reach out. And I am also looking for a product manager for AWS. So if either of those interests you, reach out to either of us: reach out to me on the AWS one, and reach out to Edward Blossom on the Cluster API one.
A
Experiment from another day, right. Yeah, that's working fine. Hello from Kansas City — Sean, a bit early, isn't it? Nice of you to join us on TGIK Europe. Right, so we now have — let's close that and just have a single screen, using a lower resolution than normal. So yep, you've got the core Cluster API controllers and the Bring Your Own Host one. Shawn is blaming Keith.
A
Right, where were we? Yes — so it might have timed out. systemd — oh, bring your own host, there we go. Oh okay, so systemd did not time out on this machine. So maybe we should go back to our machine and see what just happened there. So, after we installed the Cluster API components —
A
It's got to look at the machine, right. So previously it was just crash looping because it couldn't find — when it came back up, we didn't have any — well, first of all, the cluster wasn't running, so we turned — oh no, no!
A
So it's been crash looping for ages, yeah. So: "no matches for kind" on Bring Your Own Host. Once the CRDs got installed — at some point in time the CRD was installed — it tried to create a resource, and that failed because the Bring Your Own Host provider wasn't running yet, so it was still crash looping. So you're using validating webhooks, and they're mandatory.
B
We have this one condition type called K8sNodeBootstrapSucceeded. After you install Kubernetes and kubeadm and everything, it will go into a True state; until then, you see the reason "waiting for machine ref to be assigned". Right, so like I said, it just registers the ByoHost in the capacity pool — the list of ByoHosts that you saw, one two three four, that's the capacity pool. Now it's just waiting there for a machine to be assigned. So this machine is the equivalent of a vSphere machine — here, a ByoMachine.
B
So this is created by the MachineDeployment part of Cluster API: it spins up new ByoMachines, which then get attached to this. At the moment we have not created any ByoMachine, or any MachineDeployment — that's why you see it just waiting there for something to happen to that ByoHost.
B
Yeah, we have code that will look at the network interface name — that is ens192. This is mainly because we use kube-vip for our control plane IP, that is, for load balancing: we need to be able to identify the default interface to attach the kube-vip to, so that is where we record this information.
B
So at the moment, when you're starting the ByoHost agent as a service, you just provide the kubeconfig argument — but you can add labels also. So suppose, say, you have site A and site B: you could attach site labels, or attach labels saying "hey, make this your control plane node". So at the moment we can attach labels, but in the state it's currently in, we are not acting on those labels.
A
If it can register as one Bring Your Own Host today, presumably it can also register as a different Bring Your Own Host from what it says it is — I could manually use the kubeconfig to do something. So what would that look like? I think it's something to think about as we move forward.
B
Yeah — yep, the kubeconfig file. I think one thing I just explored was node discovery — or, I forget what it's called — Node Feature Discovery. That probably tells you what your CPU is and how much RAM you have. So yeah, we were also looking into it, but have not yet nailed down which approach to take.
A
Yeah
and
I
think
in
the
future
we
were
think
certainly
in
the
original
plans
for
this
we
were
looking
at
maybe
using
something
like
spiffy
or
spire,
to
provide
no
data
station,
so
you
can
actually
have
a
provable
identity
of
the
machine.
So
then
you
can't
just
say
I
am
actually
the
control
plane
machine.
Please
give
me
please
bridging
me,
so
you
can
do
you
will
be
able
to
have
that
tighter
security
so
yeah,
if
you're
screaming
at
the
me
using
the
admin
config.
A
This thing is creating a workload cluster, and the examples here are for doing it in Docker, so we can go past all this. Oh yeah — by the way, on that template I had installed these things; these are prerequisites for kubeadm today. I guess in the future you could bake those in as well and just have them installed — then I suppose you wouldn't even need to do that. Yeah, so we've got our hosts registered, so we are ready to do that.
A
So we need the control plane endpoint. That control plane endpoint will not work in my environment, so we'll need to use a different one — my network is 192.168.192.x.
B
Yeah, I want to talk about why we're using 1.22.3 specifically. Right now, the way BYOH is structured, we have a component called the ByoHost agent installer. This installer is responsible, given an OS — that is Ubuntu 20.04, which is what we have — and given a Kubernetes version, for installing all those Kubernetes binaries onto the host. So where does it get these binaries from? For a given OS and a given K8s version, we have defined OCI bundles.
B
So these are nothing but Carvel imgpkg-packaged OCI bundles, or images. If you just look at one of the existing images, it contains the kubelet, kubectl and kubeadm Debian files and a containerd tar — and probably, if you have specific configuration you want to pass in, you could do that as well. But right now, what we provide by default is Kubernetes 1.22.3.
B
That's why, for the first part of the demo, we'll be using 1.22.3 — because we don't want to break anything right now. We can play with it later.
A
Fair enough — we'll see this live in a minute. I don't use most of the Carvel tools apart from ytt; there are probably people in chat who know them much better than I do. So yeah, it's using these imgpkg bundles — we'll see in a minute. So, that's done — where have all my windows gone? I'm on the wrong desktop.
B
"imgpkg is awesome and allows for air gap" — I think imgpkg is super powerful in that you can build your own OCI images, move them across registries, and then, if you have an air-gapped system, copy it into your system, push it into a private registry and use it from there. So yeah, pretty powerful.
A
Yeah — and Scott says "imgpkg is awesome and allows for air gap". Yes, incredibly important for a lot of our customers. All right, so yeah, I had a funny moment there. Let's have a look at this cluster template that we generated. So we've got a KubeadmConfigTemplate. Do I need the cgroup driver setting? If I'm on a real machine, do I need cgroupfs, or systemd?
B
It probably will matter, because your kubelet will say "hey, I don't know what you're talking about."
A
Right, cool — yeah, fine.
A
I think it controls when pods will be evicted if there are certain problems with the computer — like it's run out of disk space, or stuff like that. So I think this is more for the Docker environment.
A
Do we need any of this? I think I'll just remove it — let's see what happens. Changing things live, that's always fun. Why —
A
Right, yeah — so normal pod CIDR blocks. Technically that would be bad.
A
So we're going to have one; we can change that later in a minute. So: standard Cluster API. Maybe, for those who aren't familiar with Cluster API, I'll just walk through it quickly. We have a Cluster resource — this is a core Cluster API resource, independent of the different infrastructures.
A
Mostly it has a little bit of the common networking information. Then we have a controlPlaneRef — this is where you specify which provider you're using for various things. You can plug in your own control plane provider: there's a core one based on kubeadm, which is the most common, and then there's one for Talos, which is a different bootstrapping system, and then —
A
No — yes, yes, there is. And then for the managed services — say, EKS on Amazon or AKS on Azure — you can swap that out with a managed control plane provider. Then we've got the infrastructureRef we're going to use — so you have a type of cluster, and we're going to have a Bring Your Own Cluster type of cluster. Moving on for a minute: MachineDeployments. This is another generic core Cluster API type — it's your equivalent of Deployments for Pods, just the machine-based equivalent.
A
So again we have templates referencing templates: we're using the KubeadmConfigTemplate with a ByoMachineTemplate, and a KubeadmControlPlane. Don't need host.docker.internal, I don't think.
A
That's fine — I suppose it doesn't really matter. Do I need — don't need the host path provisioner.
A
Do we want kube-vip? So kube-vip is what we're using to make a highly available control plane. Even though in this case we're only using a single node, we still need a stable endpoint of some sort. And then we have some workarounds for Docker again, which I can probably get rid of — don't need them.
A
And then we have the Bring Your Own Cluster stuff. That's the control plane endpoint that we provided — so that's what kube-vip is going to set up for us. Question?
B
So that's just the tag — you'll need the name of the image. I could probably send it to you, because it's a pretty long name.
A
Who pinged me? Whilst that's happening — and that's just more for my own interest — we can deploy this. So we've got journalctl running.
A
Right, so we have — I guess that's it.
A
Okay, we've got the stuff we want to unpack into /etc — no, not etcd. I've noticed this issue where people keep referring to /etc directories as "etcd" — I'm seeing etcd on the brain, yeah. So, for those who aren't familiar, you kind of need these for kubeadm to be successful. So that's what that is.
A
All right, so we're ready to apply the manifest now. We've got our machine here — hopefully it won't explode after my hacky changes.
B
So it just chooses one of the available hosts — it picks one of them at random. Right now we have four, and once it picks one, on the ByoHost CRD you have the status field: we have defined status.machineRef, and that's where you can see the ByoMachine being referenced in a ByoHost's status field.
A
Oh, okay. When I tried it last night it just picked one, so I thought — oh, it picks them alphabetically, that's easy — but it doesn't, so that's good. So let's pop onto this machine then, I suppose, instead of looking at this one, which is not doing anything — it's not selected.
A
At least for the purposes of this — who can be bothered learning how to exit vi?
A
I think systemd would have stopped restarting it at around 255, I think. So that's what we would have put in there in time.
B
But we could also probably look at the Machine resource.
A
Yeah, so we can now get our kubeconfig for that cluster. What's it called again?
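clusterctl can fetch it directly (the cluster name is a placeholder):

```shell
clusterctl get kubeconfig byoh-cluster > byoh-cluster.kubeconfig
kubectl --kubeconfig byoh-cluster.kubeconfig get nodes
```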
A
Okay, so we have a one-machine control plane. Do we have the other machine being provisioned? Keep doing that.
A
Maybe — we might need to check with Fabrizio whether or not that should just work.
A
There we go — so that's doing what it should be doing. Right: today I learned, don't just expect it to work. But I should find out if that's deliberate, because maybe it should just default that if you enter it. Yeah, right, so that is a cluster. So, one thing — I'm just interested — what I'm going to do now is, you can talk me through some of this. We haven't done any deletion, so let's wait for that machine to come up, and then I want to try deletion.
A
But whilst that's going, I do want to check out this reply.
A
Okay, thank you — yeah. Yes — oh, I see, you ship a configuration. That's interesting — cool, yeah. Just interested: so, that script you have — if I wanted to add, like, a Fedora thing, how would I go about doing that?
B
So there are a couple of shell scripts that pull in all these — whatever Kubernetes version you want — and then build a bundle, and then the push-bundle script will push to whatever registry you want. So we just have a couple of shell scripts over here. And then, if you see, there's a CLI for this — a basic CLI where you can see which OSes are supported and you can have a preview of the bundle install. So we will not actually go and install everything; we'll just show what will get installed.
B
So as and when we want to add support for more OSes and Kubernetes versions, we need to have more of these steps defined. Okay — so right now, if you want to try it on Fedora, probably what we could do is: can you go to the main agent folder and look at its main.go?
B
Yeah — so you see on line 90 the skip-installation flag. If you provide this flag when you start the agent itself, we will not go through the installer workflow; we'll assume all the Kubernetes prerequisites are set up, and we'll go directly to the Kubernetes step — yeah, the kubeadm init or join. So yeah, you'll have to do all the installation yourself and use this flag.
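So for an unsupported OS, the workflow would be roughly the following — the binary and flag names are as discussed, but verify them against the agent's `--help`:

```shell
# Install containerd, kubeadm, kubelet and kubectl yourself first, then:
./byoh-hostagent-linux-amd64 \
  --kubeconfig bootstrap-kubeconfig.conf \
  --skip-installation
```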
A
Yeah — so I think what we — yeah, actually.
B
Yep — so if you see, this is saying the architecture is linux and amd64, right? So you could build it for other architectures, like ARM. We've had William Lam, who tried this out on ARM — he has published a blog, and he was able to successfully get a workload cluster on that architecture as well. So yeah, if anybody is interested, do try it out.
A
Cool, yeah. I'm not sure I've got the wherewithal to do anything today, but yeah, I think it'd be pretty good to try. Great. So I guess what we would do — we've got this. Let's have a look, have a think.
A
— Ansible, where you're defining in Go, you know, how to do a certain type of operation. Importantly — and I think this is the thing the older config management tools didn't have — is how to undo them as well. That was always a challenge when you were doing Puppet back in the day, which is why we eventually went to immutable infrastructure: it was easier to reason about than trying to undo a package installation and redo it.
A
Yeah, yeah — and given that — so back in the day, an example would be: at my old job you'd spin up a machine and then install OpenLDAP and generate TLS certificates and then install Jenkins and the node. But then what if someone tried to remove Jenkins — how do you uninstall it again? That was all really complicated.
A
So that's why we started containerizing things. But I guess in this instance we're only interested in the very small set of things that we need to do to get Kubernetes working.
A
So that's something I'm playing around with a bit — this really looks interesting to me; yeah, something I definitely want to play with. So I guess — I have not slept well, so I don't think I'm ready to program today. But I did not get around to deleting the machine. So what would happen if I deleted that workload cluster?
B
I think it's one of the very interesting use cases where the BYOH provider can allow for mix-and-match of different infrastructures, right. But right now, if you want to try it out, the constraint is that they have to be on the same network. We have not solved the problem of what happens when, you know, it gets disconnected from the management cluster, or the different nodes are on different networks — but that could definitely be one of the future use cases, because it is an interesting problem.
A
Yeah, they just need to be routed together, right — they don't need to be on exactly the same subnet. So you could have, like, an AWS internet-facing cluster and then do that. I think the one you're talking about is what happens when you turn the machines off, which is a problem in Cluster API today. I think that's — yeah, a lot of people are doing that at the edge, I guess — telco, I guess.
A
Yeah, I don't mind it picking another machine — I just want to see it; I did not see that properly. There you go.
A
Oh, that's interesting! So how do we get the kubeadm configuration over to the machine?
A
— own hosts, or the remaining ones — which one is that, a3? Oh, not 13 — we don't have that many.
C
That's cool, I think.
A
Yes — Danger was saying we could fail to unlock. We do have the ability to attach labels; we could use that for filtering later. Yeah, Scott, you've answered it — yep. Any other questions from folks in the audience?
A
As ever — this is a typical YouTube outro — if you like this video, click the like button, then click subscribe and change the bell to get all notifications for future TGIK episodes, instead of just the personalized recommendations.
A
God knows what the AI algorithm for that is. So yeah: comment, like and subscribe. We're not actually making money from this, you know, but anyway, it would be nice if you did that. If you don't like it, don't give us a thumbs down — but maybe leave us a comment.
A
You can leave a comment to say what we can improve, definitely, and/or reach out to us. As we mentioned earlier, we are hiring — we're hiring Cluster API folk in both Europe and India, so do reach out. So, anything else you want to say, Anusha?
A
I hope you enjoyed the new time, and I hope it's worked for everyone. We'll see you soon — and happy Thanksgiving to those who are celebrating.