From YouTube: SIG Cluster Lifecycle - Cluster API - Code structure & Makefile targets (EMEA/Americas) - 2022-02-14
A: That's fine! Okay, let's get this started. Hello everyone! This is a "let's chat" about Cluster API. It is a new format that we are trying. Basically, let me share my screen.
So, if you go to the cluster-api repository under Discussions, there is "Let's chat about...". We have our initial set of topics that we would like to talk about, and we would like to keep these as interactive and as fun as possible, so please ask questions. This is the second episode, about code structure and Makefile targets.
We already had a similar session for the EMEA and APAC time zones; the recording is here. There is another set of topics that we would like to talk about, but if you have any ideas, anything that you would like to raise, please comment on the issue or just express your opinion.
Today I have another PSA: tomorrow we are going to meet and talk about the Cluster API ClusterResourceSet and add-ons. I'm going to send an email to the mailing list ASAP; sorry for the short notice.
But yeah, it is just an initial sharing of ideas. The meeting will be recorded, and every action will go as usual into issues and proposals or whatever, so don't worry if you can't make it. And so, if there are no questions, I can start. Stefan, do you want to add something?
Okay, so, talking about Cluster API code organization: I think a nice way to see this is, basically, using Visual Studio Code I added all the cluster-api folders, and what I'm doing is regrouping them into the project while discussing what they contain and why they are there. The main point that I think everyone should be aware of is that the cluster-api repository contains many things.
These are the Cluster API API types; from them you get the generated CRDs. Other stuff that is generated, if I remember well: we have the RBAC rules, which are generated from the markers that you have on top of your controllers, and also the webhook manifests.
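To make the marker mechanism concrete, here is a minimal sketch of what controller-gen style markers look like. The type, group, and resource names below are illustrative, not actual cluster-api code; in a real kubebuilder project these comments drive the generation of CRDs, RBAC rules, and webhook manifests.

```go
package main

import "fmt"

// Illustrative only: a trimmed-down API type with controller-gen style
// markers. These comments are inert at compile time; controller-gen
// reads them to emit manifests.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Widget is a hypothetical API type; controller-gen would emit a CRD for it.
type Widget struct {
	// +kubebuilder:validation:MinLength=1
	Name string `json:"name"`
}

// RBAC markers like the one below, placed above a controller's Reconcile
// func, are what controller-gen turns into RBAC rules.
// +kubebuilder:rbac:groups=example.dev,resources=widgets,verbs=get;list;watch

func Reconcile(w Widget) string {
	return "reconciled " + w.Name
}

func main() {
	fmt.Println(Reconcile(Widget{Name: "demo"}))
}
```

The markers themselves carry no runtime behavior; they only matter when the generators run.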
B: The webhook markers are on the webhook implementation funcs, essentially, so they should be right above them.

A: Great, okay, but not for ClusterClass and Cluster, because...

B: It actually doesn't matter where they are. We are just co-locating them, because it makes sense.
A: Let me get a better view of this. If there are questions, please feel free to chime in. And then, of course, we have controllers, where our controllers are implemented; we will have a look at that later. And then we have hack, which is basically a bunch of scripts that we use for building or testing. Okay, so let's consider these, let me say, the seed of the cluster-api project, where everything started.
A
Where
someone
in
cncf
donated
us
a
nice
logo,
and
here
we
have
the
logo
or
the
logo.
We
know
the
format
and
then,
of
course,
that
community
started
growing,
and
so
we
started
creating
creating
our
our
docs
our
documentation.
So
we
have
the
book.
We also have a third_party folder, because we basically started draining nodes when deleting machines, and there was this project with a nice implementation of cordon and drain, whatever we wanted, but I don't remember for which specific reason it was not possible to import it as a dependency.
A
So
we
started
working
on
the
broad,
maybe
for
a
small
feature
that
we
were
me.
It
was
missing.
So
we
started
from
one
side
fixing
the
the
project
and
the
plan
is
to
get
this
third
party
folder
removed
soon
stephan.
Do
you
remember
the
reason.
D: Yeah, I think, if I remember this correctly, the kubectl drain code was available as part of the larger Kubernetes code base and not available as a separate module, which it is right now, and that's probably why this was copied over.
A: So there is code in cluster-api that everyone would like to reuse, and so we started creating utilities. For instance, we have the famous patch pattern that everyone in the providers is doing: reconcile, then defer the patch. And this is where the patch util is implemented.
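The "reconcile, then defer the patch" pattern can be sketched in plain Go. This is a stdlib-only simulation of the idea behind `sigs.k8s.io/cluster-api/util/patch`, not the real API: snapshot the object when reconciliation starts, mutate freely, and write back the delta in a defer so it runs on every return path. The `Machine` and `Helper` types here are illustrative.

```go
package main

import "fmt"

// Machine stands in for an API object the reconciler mutates.
type Machine struct {
	Phase string
}

// Helper snapshots the object at the start of reconciliation.
type Helper struct {
	before Machine
}

func NewHelper(obj *Machine) *Helper {
	return &Helper{before: *obj}
}

// Patch compares the snapshot with the current state; a real helper
// would issue a patch to the API server with only the changes.
func (h *Helper) Patch(obj *Machine) string {
	if h.before == *obj {
		return "no-op"
	}
	return fmt.Sprintf("patch Phase: %q -> %q", h.before.Phase, obj.Phase)
}

func Reconcile(m *Machine) (result string) {
	h := NewHelper(m)
	// The deferred patch runs no matter which return path reconcile takes.
	defer func() { result = h.Patch(m) }()
	m.Phase = "Running" // reconcile mutates the object freely
	return
}

func main() {
	m := &Machine{Phase: "Pending"}
	fmt.Println(Reconcile(m))
}
```

The value of the defer is that every early-return and error path still persists whatever was mutated so far.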
But yeah, we have version: this basically helps you inject into the binaries that you have in your project all the version attributes, let me say, at build time.
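Build-time injection in Go is done with linker flags. A minimal sketch of the mechanism such a version package relies on (variable and package names here are illustrative, not the actual cluster-api ones):

```go
package main

import "fmt"

// Package-level variables with defaults used when building without ldflags.
var (
	gitVersion = "v0.0.0-dev"
	gitCommit  = "unknown"
)

// A release build would override them with something like:
//   go build -ldflags "-X main.gitVersion=v1.1.0 -X main.gitCommit=abc123"

// Version reports whatever the linker (or the defaults) provided.
func Version() string {
	return fmt.Sprintf("%s (%s)", gitVersion, gitCommit)
}

func main() {
	fmt.Println(Version())
}
```

Run without flags it prints the dev defaults; the Makefile is what wires the real values in.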
We have feature gates, and this folder provides feature-gate management that everyone can reuse in their project instead of implementing it themselves. And we have also some well-known errors; they are mostly used in cluster-api now, but some time ago they were also checked in providers, when we were basically returning the...
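A minimal sketch of what reusable feature-gate management looks like; the real cluster-api feature package is modeled on the Kubernetes feature-gate machinery, while the names and types below are illustrative:

```go
package main

import "fmt"

// Feature names a gated capability.
type Feature string

// Gates tracks which features are on.
type Gates struct {
	enabled map[Feature]bool
}

// NewGates seeds the gates with their defaults.
func NewGates(defaults map[Feature]bool) *Gates {
	g := &Gates{enabled: map[Feature]bool{}}
	for f, on := range defaults {
		g.enabled[f] = on
	}
	return g
}

// Set flips a gate, typically driven by a --feature-gates=Foo=true flag.
func (g *Gates) Set(f Feature, on bool) { g.enabled[f] = on }

// Enabled is what controllers check before exercising gated behavior.
func (g *Gates) Enabled(f Feature) bool { return g.enabled[f] }

const MachinePool Feature = "MachinePool"

func main() {
	gates := NewGates(map[Feature]bool{MachinePool: false})
	gates.Set(MachinePool, true)
	fmt.Println(gates.Enabled(MachinePool))
}
```

Providers reuse this instead of each reimplementing flag parsing and lookup.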
A: So when we do a release, typically we don't only release cluster-api: we release cluster-api, and CABPK, and KCP, and also CAPD, because basically we use all of them to validate that everything works. Before, CABPK, KCP, and CAPD were in different projects, and so we were kind of playing a coordination and release-orchestration game in order to get all of them released at the same time. And so, in order to make our life a little bit simpler, here is what we did.
A
We
deprecated
those
repository,
and
so
now
we
have
cap
bk.
We
have
kcp
and
we
have
cut
b.
A
New
provided
provider
inside
the
kubern,
the
cluster
api
code
made
and
as
you
can
see,
those
providers
itself
are
kuber
builder
project.
So
you
have
api,
you
have
config,
you
have
controllers,
and
then
we
will
discuss
a
little
bit
about
internet.
This
applies
to
cad
bk,
so
booster
recovery
mean
it
applies
to.
A
Kcp
control
playing
kubernetes
api,
config
controllers,
and
it
applies
to
the
docker
provider
as
well
yeah.
There is also some other stuff, but those are implementation details and we will talk about them a little bit later. So that means that in the cluster-api repository we started getting more than one provider. Luckily, all of them are released at the same time, but then, of course, people started...
A: Okay, thank you. And so, to finish, let me go back to the most recent changes: we added two new folders. Let's start with internal. So what happened? What happened is that cluster-api has been graduated to v1.0.
A
Not
these
api
types,
but
let's
say
everything
that
you
can
import
and
use.
Looking
looking
at
this,
we
start
asking
ourselves,
but
are
we
exposing
only
what
is
intended
to
be
user
by
provider
by
provider
or
by
anyone
importing
cluster
apis
as
a
library,
or
are
we
exposing
too
much?
Are
we
having
too
many
public
methods
that
yeah
they
are
there?
People
can
start
using
and
then
this
makes
us
a
little
bit
slow
because
we
have
to
respect
the
guarantee
and
stuff
like
that.
So we added internal, like in any Go program, and we started moving stuff into internal in order to make really clear what is CAPI as a library, which is mostly the utils and the folders that we discussed before, and what is instead CAPI implementation detail. And what happened is that more or less all the controllers...
Actually, I think all the controllers are now no longer in this folder: we have an alias, but the controller implementation is now all moved to internal. So there we have all the controllers; what we expose is just the reconciler type and SetupWithManager.
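The shape of that alias can be sketched in a single file. In the real repo the two halves live in separate packages (the public controllers package and internal/controllers, which Go forbids importing from outside the module); here the unexported type plays the role of the internal implementation, and everything is illustrative rather than actual cluster-api code:

```go
package main

import "fmt"

// clusterReconciler stands in for the implementation that moved to
// internal/controllers; unexported to mimic the hidden surface.
type clusterReconciler struct{}

func (r *clusterReconciler) reconcile(name string) string {
	return "reconciled " + name
}

// ClusterReconciler is the only thing consumers see: an exported type
// plus its setup entry point, delegating to the hidden implementation.
type ClusterReconciler struct {
	impl clusterReconciler
}

// SetupWithManager is the public wiring point; the real one registers
// the internal reconciler with a controller-runtime manager.
func (r *ClusterReconciler) SetupWithManager() error {
	return nil
}

// Reconcile delegates to the internal implementation.
func (r *ClusterReconciler) Reconcile(name string) string {
	return r.impl.reconcile(name)
}

func main() {
	r := &ClusterReconciler{}
	fmt.Println(r.Reconcile("my-cluster"))
}
```

Consumers depend only on the thin exported surface, so everything behind it can change without breaking importers.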
A
Now
we
are
generating
three
darker
images.
A
So
one
forecast
api
call
core
one
for
cap
bk
one
for
kcp
and
then
you
need
another
one
for
the
provider.
So
you
need
four
images,
but
maybe
someone
want
to
deploy
what
wants
to
keep
all
those
providers
group
them
in
a
single
executable
and
deploy
just
it.
So
we
want
to
support
this
scenario,
but
everything
else.
A: Okay, the webhook implementation implements the Validator and the Defaulter interfaces. This feature has been in controller-runtime for ages, and it allows you to implement webhooks where the webhook basically only acts on the type that you are working on. So this is a defaulter for the Machine type: you get only the Machine object in, and you are basically not allowed to go and read something else or to get a client. The same is true for validators.
So if anyone wants to package these webhooks in a different way, they can do that, but the implementation is under internal, and those webhooks use a different interface, which is CustomDefaulter and CustomValidator, I mean. And the difference is that they get the context, and you can implement them on different types, and in this type you can get a Reader, and so you can do crazy stuff like: when I'm validating the Cluster, I want to check that the ClusterClass exists, or vice versa.
E: [question inaudible]

A: Thank you, that's good feedback. Yeah, historically, let me say, things happened in a scattered order, and I try to make some kind of sense out of it, but yeah, more or less things got complicated over time. That's the TL;DR, Sagar. Please, Peter?
D: Yeah, and I think that this particular historical context is much more useful now that we are seeing more and more newer providers coming up. So this might actually be very helpful for them to get their code organization right, and very close to CAPI, on the first attempt, and not go through the same cycle that CAPI went through.
A: It would probably also be interesting to have a subfolder in api, because right now this is the cluster-api API. But what happens when we promote the experimental APIs, which are different API groups? We will move them here and we will have a clash of folders, so sooner or later we have to sort this out. The other kind of lesson learned is: use util, or a package, to group things.
C: Yeah, I just wanted to give you both a shout-out, because I think the way you organized the information, and the way we kind of went through the directories and stuff, was spot on. Excellent job, and I think the way you organized it is awesome, so plus one from me; I'm really happy with the way you guys did this.

A: Thank you, thank you for the great feedback.
A: And okay, so if there are no questions, we can jump to the next topic, which is the Makefile, and hopefully the context will help in figuring out how the Makefile works. So, first of all, in the cluster-api repository, unfortunately we don't have only one Makefile; we have more than one. We have the main Makefile, which is really big, and we will focus on it.
There is also one under the Docker infrastructure provider. So, let me say, the problem happened more or less when we started playing around with batteries included, bringing in more stuff, and we started by having different Makefiles, because that was the state of the art when we started importing. And then we slowly started dropping the ancillary Makefiles and moving everything into a single one.
A
We
are
not
finished
yet,
yet
kappa
d
is
still
there
stuff
like
that,
but
most
of
the
stuff
are
already
in
the
main
make
file.
So
I
have
a
look:
let's
have
a
look
at
how
it
is
organized.
So
luckily.
A: So, for instance: you start writing code, you write the API, you write your types, and then, as we saw, we need to generate, because we add markers and then we get code generated. So we have this uber generate target that generates everything. Basically, it generates all the manifests, that is, the folders under /config, for all the config that we have in the project, except CAPD, which has its own Makefile, but we will fix that up soon.
Typically we just call that one, because it's super fast: generating deepcopy is super fast, and you don't really care whether it generates one file or three. Generating conversions, instead, is slow, really slow, so for conversions you typically go and call only the target that you need, because otherwise you wait ten minutes and it is just boring. So: write code, and generate only what you need, for example if you are playing with webhooks. There is also a generate-diagrams target, if you are writing PlantUML files, but we are not.
Finally, after building, there are usually tests. Let's say this one runs the unit tests and integration tests. These are, let me say, all variants of the same thing: just running integration and unit tests while generating a JUnit report, which is what we use in Prow to get...
A
The
the
red
and
green
boxes,
stuff,
yeah
coverage,
test
coverage,
verbose
just
to
get
more
logging,
and
this
is
the
target
that
that
you
can
use
to
run
and
to
end
test
locally.
Typically,
you
yeah.
It
is
describing
in
the
book
how
to
set
a
variable
so
that
it
runs
only
one
and
one
test
instead
of
many,
because
otherwise
it
will
take
too
long,
and
then
we
have
also
the
the
methods
for
testing
the
book.
A
If
you
change
the
documentation,
you
can
do
make
sure
book
and
spin
up
a
local,
a
local
version
of
the
book
with
live
reloading.
So
you
can
go
test
your
link.
If
you
take
the
work,
they
don't
work,
fix
it
page
reload
and
you
can
check
and
yeah
some
other
utility
get
me
a
kind
cluster
for
testing
with
tilt.
And finally, we have make release. Again, most of these are called by CI, but in some cases you want to test them locally, so they are all there. The same goes for...
A
Building
docker
image
or
for
pushing
docker
image
to
a
repository
now
everything
all
this
stuff
is
run
by
ci,
typical
user
done
and
also
we
have.
We
have
field
for
local
testing,
so
we
don't
really
need
them
and
yeah
to
finish
up.
We
have
the
clean
targets,
so
let
me
say
all
this
stuff
creates
temporary
folder
right
files,
so
you
can
clean
up
them
and
this
is
egg
tool.
So
again,
all
this
stuff
uses
stuff
use
a
controller,
gen
conversion
gen.
A
A
C: So I have kind of a speculative question here, so feel free to tell me to wait till later or something if it's not appropriate. But, you know, as I'm looking through the Makefile and the CI processes we're talking about here, I'm thinking about, hopefully, someday when we'll be able to use the kubemark provider with the CI runs that come from the cluster-api repo, and I'm just wondering...
A: That's a good one, because I think that, first of all, we have to figure out basically how to express which version of Kubernetes we want to rely on, and then, as soon as we get that, I would like to make this as transparent as possible. So I would like to get kubemark in the same way that I will get Docker: as soon as we get rid of the nested Dockerfile, I would have the generate and manifest targets cover kubemark too.
A
Test
end
to
end
now
brings
in
cup
d,
and
I
do
expect
that
it
brings
it
cube
mark
in
a
totally
transparent
way.
Maybe
I
will
need
some
tool,
some
some
stuff
to
fetch,
manifest
or
whatever,
but
in
terms
of
user
experience,
it
should
be
transparent
that
that
we
are
starting
to
use
kobe
mark.
I
don't
know
for
testing
autoscaler.
F: [comment inaudible]

A: That definitely worries me too. The problem is that it is a trade-off: it is a project that contains many, many things, but we want to make this as transparent as possible, so that you just do tilt up and you get everything, or you run make with this target and everything runs. Otherwise, we create a barrier for new contributors.
B: I would just do the Tilt stuff in the next session; I don't really have it here right now. It just depends on whether you have starter questions, but I think, apart from that, we would be done for today.
A: Okay, that's fine for me. So if people have something that they want to chat about, the meeting is here for that; otherwise we can close it and have some time back.