A: Okay, hello everyone. This is the Kubernetes SIG Cluster Lifecycle Cluster API office hours meeting. Today is Wednesday, the 7th of April, and before we start, a few notes on meeting etiquette. First of all, please use the raise-hand feature of Zoom.
A: Second, if you have a topic that you want to talk about, feel free to edit the agenda; the agenda is in this document, and I've pasted the link to it in the meeting chat. If you don't have access to the document, please join the SIG Cluster Lifecycle mailing list, and you should get access immediately.
A: First of all, thanks to Shyam: in Cluster API we now have upgrade tests and conformance tests across five Kubernetes versions for the workload cluster. We start from stable minus three, which is currently Kubernetes 1.17, and we test upgrades up to the current Kubernetes master, which is 1.21-rc-something.
A: So thank you, Shyam, for this. It is really helpful; it will take some time to stabilize those tests, but they are really useful.
A: The second PSA, also regarding tests and test improvements: a big kudos to Stefan, who is helping with improvements to our tests. He added JUnit reports for the unit tests in the presubmit and periodic jobs, and it is quite impressive to see that we are running more than 2,000 tests on each run.
A: Secondly, he increased the logging for kubeadm in our end-to-end tests, and thanks to this additional logging he was able to find a very subtle error in kind that was causing many flakes in our end-to-end tests. The PR fixing this is already merged in kind, and we will pick up the change in Cluster API soon.
B: Thanks. So, yeah, I get to talk about the not-so-fun stuff of milestone grooming. I just wanted to call out that we have continued making improvements to milestones and grooming over the last few weeks. The release-blocking label has merged and has been applied, so if you click on that list of issues, it is tracking the issues in the 0.4 milestone that are considered release blocking.
B: So that means we would want to wait, or make an exception case by case, for these if they're not in by 0.4.0. If you have anything you're tracking that you think should be in here but currently isn't, please flag it to one of the CAPI maintainers so we can make sure it's being tracked. And if you see anything in here that you're currently assigned to but not actively working on, now is a good time to check back in or unassign yourself.
A: Okay, thank you very much, Cecile, for this work. The next one is still on you.
B: Yeah, and the next one is sort of related; this is just a circle-back. Last time we talked about a deadline for merging proposals targeting v0.4.0, and we agreed on April 21st. The discussion went out in an email to the Google group, and no major objections were raised, so I think we're going to stick with that deadline.
B: Yes, yeah. I think this is only targeting 0.4.0, so especially breaking changes and anything that needs to be in the v1alpha4 release.
A: Okay, so if there are no more questions regarding milestone grooming and the deadline for v0.4.0, let's move on to Jason for the Tinkerbell provider demo. I'm really looking forward to this one.
D: All right, thanks. Can you go ahead and give me permission to share my screen?
D: Looks like it's no longer letting me share just a window; all right, I'll do it this way then. So before I jump in, because I know not everybody is familiar with what Tinkerbell is, I just wanted to give a brief overview. Tinkerbell is basically an attempt at a cloud-native approach to infrastructure management and orchestration.
D: So, in addition to being able to, you know, provision an OS onto an instance, you can also run arbitrary workflows against hardware that you have, and there are a few different components of Tinkerbell that help enable that. There's the core Tinkerbell workflow engine; this is what lets you define the hardware that you want to manage with Tinkerbell.
D: It also lets you define what we call templates and workflows. Templates are basically the way you define actions that you want to be able to run against hardware, and a workflow ties an instance of a template together with a particular instance of hardware to make it actionable.
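As a rough sketch of that relationship: a template lists actions to run, and a workflow binds a template to a specific device. The structure below follows the general shape of Tinkerbell's 0.x template format, but the names and values are illustrative, not taken from this demo:

```yaml
# Illustrative Tinkerbell template: a list of actions to run on a worker.
version: "0.1"
name: example-template
global_timeout: 600
tasks:
  - name: example-task
    worker: "{{.device_1}}"   # filled in when a workflow binds hardware to this template
    actions:
      - name: say-hello
        image: hello-world    # any container image usable as an action
        timeout: 60
```

A workflow is then created by pairing this template with a concrete hardware identifier, which is what makes the actions runnable on that particular device.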
D: There are also a few microservices involved in the overall system. The first one is Boots, which provides the DHCP and PXE booting services; there's Hegel, the metadata service; and there's a worker component that actually runs on the infrastructure, reaches out to Tinkerbell, queries which workflows need to be run on that hardware, and is responsible for executing those workflows.
D: All right, and let me go ahead and switch my camera as well, so hopefully you can see it. What I have here is basically this box on the right; that one is actually running the Tinkerbell services. Then I have five smaller boxes on the left that are configured in Tinkerbell as hardware. So if I come in here and run tink hardware, we can see that there are five devices.
D: I did cheat a little bit, because you can define the UUIDs of the hardware when you define them in Tinkerbell, so I know which physical box is which based on the last letter of the ID. I have that running, and I also have Cluster API running via Tilt, along with the Cluster API Tinkerbell provider.
D: So I have the hardware defined, and right now I don't have anything defined on the Cluster API side, but I do also have these Tinkerbell types: the hardware, the template, and the workflow. I have CRDs for those deployed to the cluster as well. So if I run kubectl api-resources here, we can see this API group, tinkerbell.org, that defines hardware, templates, and workflows, and there are a couple of reasons for this.
D: We can see that there isn't anything in here by default; I need to specifically tell it which pieces of hardware I want to use, and I do that with an example like this one right here. So let me just apply that real quick.
D: If I come in here, we can see in the spec it's the ID that I defined, but here in the status I have a little bit more information, and it's getting that information directly from Tinkerbell itself. We can see that this hardware has one disk device at /dev/mmcblk0, and it has one network interface that's configured to allow PXE booting and to allow running workflows.
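The hardware object being described looks roughly like the sketch below. The apiVersion and field names are my approximation of the provider's CRDs at the time, so treat them as illustrative rather than exact:

```yaml
# Rough sketch of a Hardware custom resource mirrored into the
# management cluster (field names approximate and illustrative).
apiVersion: tinkerbell.org/v1alpha1
kind: Hardware
metadata:
  name: device-a
spec:
  id: 00000000-0000-0000-0000-00000000000a   # UUID registered in Tinkerbell
status:
  disks:
    - device: /dev/mmcblk0                   # the single disk described above
  interfaces:
    - netboot:
        allowPXE: true                       # PXE booting permitted
        allowWorkflow: true                  # Tink workflows permitted
```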
D: We can also see that there's this metadata here as well, and this is metadata that we can define on the host and make available through the Hegel metadata service. In this case, this hardware had previously been used in this Cluster API demo, so it actually still has the user data that we would inject into a system sitting in there right now.
D: That said, the next thing I can do is go ahead and create a cluster, and I'll do a quick walkthrough of the template there. The other thing is that I only created that one hardware entry, because this hardware doesn't have any type of management layer; I only have the monitor plugged into the device that I call A, and I wanted to be able to show the screen. That's the reason I only applied one hardware device instead of all five.
D: So if I come in here to the cluster template, we can see this is basically just a standard cluster template. There really isn't much in here other than this provider ID, and that is just to work around the fact that there isn't a Kubernetes cloud provider for Tinkerbell, at least not yet. This is what populates the provider ID, gets it assigned to the node when it comes up, and lets us make the link between the node and the physical hardware. And that's really about it.
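In other words, each node ends up carrying a provider ID that points back at the Tinkerbell hardware. The URI scheme shown below is illustrative (the usual provider://id convention), not confirmed from the demo:

```yaml
# Sketch of the node-to-hardware link via spec.providerID
# (the "tinkerbell://" scheme and the UUID here are illustrative).
apiVersion: v1
kind: Node
metadata:
  name: demo-control-plane-a
spec:
  providerID: tinkerbell://00000000-0000-0000-0000-00000000000a
```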
D: So, in addition to having Tinkerbell installed on that machine, the web server on that machine also hosts a pre-built image that I made with Image Builder for Kubernetes, I think 1.18.15. So when I deploy this cluster, I'm going to specifically request that Kubernetes version so that it picks up the right image.
D: If I go back up to the command that I ran, I've overridden the pod CIDR here. The main reason I did that is that the Tinkerbell host has the IP address 192.168.1.1, and if I had used the default pod CIDR, which aligns with the default Calico network, I would run into IP address collisions. So I'm overriding the pod CIDR here, and when I deploy the CNI, I'm actually going to deploy Cilium.
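The collision being avoided here is easy to check mechanically. Assuming Calico's default pod CIDR of 192.168.0.0/16, the Tinkerbell host's address falls inside it:

```python
import ipaddress

def host_collides_with_pod_cidr(host_ip, pod_cidr):
    # True when the host address falls inside the pod CIDR, which would
    # let pod routes shadow the route to the host once the CNI is up.
    return ipaddress.ip_address(host_ip) in ipaddress.ip_network(pod_cidr)

# Calico's default pod CIDR overlaps the Tinkerbell host's address:
print(host_collides_with_pod_cidr("192.168.1.1", "192.168.0.0/16"))  # True
# An overridden CIDR avoids the collision:
print(host_collides_with_pod_cidr("192.168.1.1", "10.244.0.0/16"))   # False
```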
D: Instead of Calico, that is, to avoid the collision. I'm also telling it to use only one control plane machine right now, because we don't yet have HA support; we need to introduce something to provide load balancing before we can do that, since we're talking about physical infrastructure here. And I specified zero worker machines, because I've only defined that one piece of hardware, along with that 1.18.15 Kubernetes version.
D
So
if
I
come
in
here
cube
cto
get
cluster
api.
We
can
see
that
we
do
now
have
a
tinkerbell
machine
and
it's
assigned
to
that
a
instance.
The
only
hardware
that
we've
defined.
So
let
me
go
ahead
and
turn
that
on
so
at
this
point.
Hopefully
it'll
come
over,
but
as
the
hardware
comes
up,
we'll
actually
see
it
go
through
a
few
different
steps.
D: The first step is that it's going to PXE boot into the Boots service and retrieve what we call an operating system installation environment. So it's going to download that OS image, PXE boot off of that, and then it's going to start executing the workflows with the Tink worker.
D: While that's going, what I can do is show the actual Tinkerbell resources that got created here. So if I do a get on the Tinkerbell resources, and if I can spell, we'll see that in addition to the hardware I defined, there is now also a template and a workflow.
D: And I am not typing very well today, so I apologize for that. Let me find the right part. We can see here in the spec, as I mentioned, that a workflow just ties hardware to a template, and that's basically all it does. Then in the status we see which actions are defined for that particular workflow by applying that template to that hardware, and we basically have three actions here.
D: The first one is basically this image-to-disk action, which tells it to take this gzipped image that was built with Image Builder and write it to the /dev/mmcblk0 device, and that device name was pulled directly from the hardware information that was defined.
D: The second action is basically a write-file action: it writes a file that extends the cloud-init configuration. What we're basically doing here is telling it to use the EC2 metadata datasource and overriding the metadata URLs. This 169.254.169.254 is a link-local address that I previously configured on the Tinkerbell host, and this port 50061 is associated with the Hegel metadata service. I'm also telling it to set strict ID to false, because Hegel isn't an exact replica of the EC2 metadata service.
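The file being written is a cloud-init datasource override along these lines. The keys follow cloud-init's Ec2 datasource configuration, but the file path is illustrative and this is a hedged sketch rather than the exact file from the demo:

```yaml
# Sketch of the cloud-init override written by the write-file action.
# Illustrative path: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
datasource_list: [Ec2]
datasource:
  Ec2:
    metadata_urls: ["http://169.254.169.254:50061"]  # Hegel on the Tinkerbell host
    strict_id: false   # Hegel is not an exact EC2 metadata replica
```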
D: So if I wanted to, I could extend the cluster template that I applied and actually apply a particular SSH key for accessing the host, or override the password, or things like that. The final action, run after writing that configuration to disk, is this kexec action, and it's basically just telling it to go ahead and kexec into the kernel.
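Put together, the three actions walked through above would sit in a template along these lines. The action image names and environment keys follow the general shape of Tinkerbell's stock actions, but treat every name, URL, and value here as illustrative, not a copy of the demo's template:

```yaml
# Hedged sketch of a three-action provisioning template
# (all names, URLs, and paths illustrative).
version: "0.1"
name: ubuntu-install
global_timeout: 1800
tasks:
  - name: os-install
    worker: "{{.device_1}}"
    actions:
      - name: stream-image-to-disk        # action 1: write the OS image
        image: image2disk
        timeout: 600
        environment:
          IMG_URL: http://192.168.1.1/ubuntu-1.18.15.gz
          DEST_DISK: /dev/mmcblk0
          COMPRESSED: true
      - name: add-cloud-init-config       # action 2: extend cloud-init
        image: writefile
        timeout: 60
        environment:
          DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
          CONTENTS: |
            datasource_list: [Ec2]
      - name: kexec-into-installed-os     # action 3: boot without a reboot
        image: kexec
        timeout: 60
```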
D: That is, the kernel associated with the image I just deployed there. So there's no need to wait for the hardware to reboot; it will automatically bootstrap into the Linux environment that we deployed to disk. At this point, on the hardware itself, we've already PXE booted into the operating system installation environment and we're running the Tink worker; it's in the process of streaming that image to the disk.
D: This hardware is a little bit slow, so it takes a little while. Now it's running the rest of those actions, and now it's starting to kexec into there. So now we're getting to the point where we're actually ready to bootstrap the Ubuntu system and run cloud-init with the user data that was defined.
D: Yeah, because if I did it now... oh wait, it is running at this point now, it appears. All right.
D: So now it's completed and I have the node. As per usual, if I deploy the CNI, Cilium in this case, then within a short while, as soon as that pod becomes available, the pods for Cilium become available.
D: If I had some more advanced hardware with some type of IPMI capability, I could leverage the PBnJ component of Tinkerbell to automate power cycling the hardware, instead of having to basically reach over and power it up and down. I know there are other things on the agenda, so I don't want to take up too much more time, and I also want to leave room for any questions anybody may have.
E: Hi everyone, this is Jerry. Just curious: is there any way we can assign an IP address to the Kubernetes cluster in the template? In a corporate environment, the IP address usually needs to be pre-reserved.
D: It wouldn't be too difficult to wire through a system where you could specify unused addresses, and we could have the Cluster API provider update the hardware's information with that IP address and leverage it that way as well.
E
So
was
there
any
entry
in
the
cluster
like
a
template
file?
Was
that
one
be
able
to
like
assign
it
there
sort
of
hard
call
it
maybe.
D: Yeah, like I said, right now we expect the hardware to be predefined in there, and we're letting Tinkerbell manage the IP addresses for now, but that's something we could add in the future for sure.

E: Okay, thank you.

D: Thank you.
D
And
thank
you
all
if
anybody
has
any
more
questions,
I'll
pop
some
more
resources
into
the
notes,
otherwise
just
reach
out
to
me
thanks.