From YouTube: SIG Cluster Lifecycle - Cluster API - Code structure & Makefile targets (APAC/EMEA) - 2022-02-10
A: Okay, hello everyone. We are here today to chat about Cluster API code structure, the Makefile, how to build and debug stuff, and yeah. So let me start sharing. Hm, why don't I see it... okay, here it is. Do you see the code?

Yeah, we can see it, yes.
A: Okay, so first, a few things before we chat: first of all, feel free to ask questions, ping me, whatever, and let's discuss. The first thing that we will try to do is to give a sense of all the directories that we see here. What I'm doing now is basically using gitignore to hide all the folders, and then we will restore them one by one, in an order that hopefully will help us in understanding stuff.
A: Let's start with, let me say, what CAPI core is. If you look at CAPI core, at the end it is a Kubebuilder-generated project. A Kubebuilder-generated project usually comes with the following folders. It comes with a folder with the APIs, where you have the different API versions; for each API version you have, of course, the types, and you also have some webhook definitions.
A: Okay, so far so good, this is standard, let me say, Kubebuilder convention, and you can find the same in the providers. Then there is this folder, config. The config folder is used to generate the component YAML, so the YAML that we use to install the controller in the cluster, and it has some stuff in it.
A: The RBAC rules get generated from markers that are on the controllers, and yes, I think that... yes. Any other questions about the config stuff?
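To make the marker mechanism mentioned above concrete: in a Kubebuilder project, RBAC rules are declared as comments on the controller, and controller-gen turns them into manifests under config/rbac. The sketch below is illustrative only; the groups, resources, and reconciler shown are placeholders, not CAPI's actual markers.

```go
package main

import "fmt"

// Kubebuilder-style markers typically sit above the reconciler; running the
// generators turns these comments into RBAC rules under config/rbac.
// The group and resource names below are placeholders for illustration.

// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters/status,verbs=get;update;patch

// ClusterReconciler is a stand-in for a controller type that markers annotate.
type ClusterReconciler struct{}

// Reconcile is where the controller logic would live; the markers above grant
// it the permissions it needs once the manifests are regenerated.
func (r *ClusterReconciler) Reconcile(name string) string {
	return fmt.Sprintf("reconciled %s", name)
}

func main() {
	r := &ClusterReconciler{}
	fmt.Println(r.Reconcile("my-cluster"))
}
```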
A
I
assume
not
another
thing
that
is,
that
may
be
worth
notice.
Is
that
also
in
the
api
we
have
generated
code,
we
have
generated
the
copy
generated
conversion
stuff,
like
that
and
yeah.
Also
them
are
generated
from
from
markets
for
markers.
So
let
me
say
this
is
a
light
motif.
Every
cuba
builder
project
has
used
heavily
used
generators
and
then
part
of
the
kube
of
cooper
build
a
skeleton.
Of
course
we
have
controllers.
A
We
will
take
a
look
at
this
later
and
we
have
a
folder
which
is
called
x,
where
you
have
a
bunch
of
script
or
utility
for
generating
stuff,
but
yeah.
We
we
have
over
time.
We
have,
although
added
also
a
lot
of
other
stuff
that
we
use
in
in
our
make
files
or
in
tilt
or
or
in
the
book
whatever.
So
this
is
the
the
first,
let
me
say
the
the
origin
that
the
real
starting
point
for
from
copy.
A: Okay, so: api, config, controllers, hack, and, yeah, a Makefile that takes care of all the generators. We will have a look at the Makefile later, and yeah, that is, let me say, the starting point.
A: We started a book for the documentation, and we also started having a CI signal. So let's make this folder visible so we can take a look. So, logos: easy one, you can find our nice logos there. Scripts: there is a bunch of scripts that our CI jobs reference; there is one for verify, one for the tests, some other for, I don't know, the API diff, for end-to-end tests, and if you are working on CI signal you are getting to know them. And so, okay, the project started.
A
We
started
adding
things
and
and
making
things
complex,
but
so
far
more
or
less
everything
was
really
died.
Related
to
the
to
the
capricorn
beats
the
next
step.
I
happened.
A
I
if
I
remember
well
sometime
around
the
v1
alpha,
3
and
and
then
basically
what
happened
that
cluster
pi
moved
to
the
current
provider
model
before
cluster
and
and
and
this
idea
of
batteries
included
kick-in
so
before
we
had,
for
instance,
kcp
or
the
kubernetes
bootstrap
as
a
separated
repository,
okay,
that's
mean
different
lifecycle,
different
versioning
schema
and,
and
so
when,
when
there
was
the
need
to
release
copy,
it
was
kind
of
of
a
game
of
synchronizing
everything.
A
But
this
this
was
kind
of
complex,
because
yeah
a
provider
can
follow
up
with
some
delay,
but
some
things
like
kcp
or
bk
need
to
stay
in
sync
with
the
core
copy
and
the
same
apply
for
cup
d.
So
the
idea
was
okay,
let's
drop
the
separate
repository
from
the
stuff
and
let's
bring
these
other
pieces
together
into
into
cluster
api.
So
what
happening
is
that?
A: Okay, we started bringing KCP, CABPK and CAPD into this repository, and now things start to get a little bit complicated, because, for instance, for CABPK we created a bootstrap folder.
A: Okay, because it is a bootstrap provider, a bootstrap provider based on kubeadm, that's fine. And then, if you look at it, we are back, let me say, at the... CABPK is yet another Kubebuilder project, so it has its own api, its own config that gets generated from the api and from the controllers, its own controllers, internal...
A: We will talk about that. And, yeah, in this case kubeadm also has the problem that, at the end, we are generating the kubeadm config, and given that kubeadm is not easy to import, we have a copy of the upstream kubeadm API types. But I think the TL;DR here is that inside CAPI there is a core Kubebuilder project, and then there are nested Kubebuilder projects. Does that make sense?
A: Okay, so the first one is CABPK, which is here. The second one is KCP, which is a control plane provider, kubeadm, I mean, a control plane provider based on kubeadm, and again we have api, config, controllers, and we will talk about this stuff.
A: Basically, we already have four Kubebuilder projects: one at the top level of the directory tree and three others nested, okay.
A: This is CAPI, and yeah.
A: Then clusterctl has been added. We have a cmd folder for clusterctl: cmd/clusterctl. clusterctl is not a Kubebuilder project, it is a CLI, but, yeah, it has its own API types, because, if you know clusterctl, it creates an API type which is called Provider, and that is used to manage the inventory of providers: how many providers do you have, the list of providers installed in your cluster.
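The inventory idea described here can be sketched with a toy model: one record per provider installed in the management cluster. Note this is a minimal stand-in for illustration; the field names and type strings below are assumptions, not clusterctl's exact Provider API.

```go
package main

import "fmt"

// Provider is an illustrative inventory record, loosely modeled on the idea
// of clusterctl's Provider type: one entry per installed provider.
type Provider struct {
	Name    string // e.g. "cluster-api", "infrastructure-docker"
	Type    string // e.g. core, bootstrap, controlPlane, infrastructure
	Version string // installed version
}

// inventory renders the list of installed providers, which is what the
// Provider API type is used to track.
func inventory(providers []Provider) []string {
	out := make([]string, 0, len(providers))
	for _, p := range providers {
		out = append(out, fmt.Sprintf("%s/%s@%s", p.Type, p.Name, p.Version))
	}
	return out
}

func main() {
	ps := []Provider{
		{Name: "cluster-api", Type: "core", Version: "v1.1.0"},
		{Name: "kubeadm", Type: "bootstrap", Version: "v1.1.0"},
	}
	for _, line := range inventory(ps) {
		fmt.Println(line)
	}
}
```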
A: But then we have cmd. clusterctl is designed to also be used as a library, so we have the clusterctl client library, which is actually the core of clusterctl, plus some utilities under internal and log. So clusterctl is different: it's not a Kubebuilder project, it is a CLI, but, yeah, it is another thing inside the same repository.
A: So here they are: exp, the experimental features. They are two Kubebuilder projects nested in there, by the way, due to historical reasons. The main difference is that the types generated for exp basically get deployed together with core CAPI.
A: And that's, let me say, what we did historically.
A: Okay, when we started with the second experiment, it was not possible to merge them together, because they are two different things, two different API groups. So, for the second one: at the top level you have the first experiment, MachinePool, and then a folder that contains the second one, which is ClusterResourceSet.
A
It
is
because
cluster
resource
set
at
the
end
are
for
managing
the
domes
and
and
if
you
look
at
the
api,
if
you
look
at
the
api,
the
the
group
is
addons,
sure
sure,
okay,.
A: The directory name is basically the first group name prefix, exactly. So if there will be a new experiment, there will be a new folder. Something that we can do, that we can propose to the community, but it is kind of a breaking change, is: okay, let's sort out this bad nesting and create exp/machinepool, and so things get a little bit clearer, because I understand it is confusing, but I think...
D: I think another problem you will run into is: let's say that experiment is done at some point and we want to promote add-ons, and we end up with another top-level folder, which is addons/api. So we have api and addons/api, and with each new group we get a new top-level folder, and that's, yeah, something you have to discuss. I get the point that maybe api/<api group>/<version> would be a better pattern.
F: Oh yeah, so my question was: like you said, first you had only one experimental feature and then CRS came along, that's why we have the nested folders. What would have been ideal if you had considered from the start that there are going to be multiple experimental features? Like, suppose, thinking out loud, I want to expose some experimental things.
A
I
I
will
add
this
machine
pool
and
these
api
controllers
egg
internal
would
be
under
machine
pool.
A: Yes, because, you know, there are two concerns at play. One is to keep the code well organized, and we kind of failed, or, yeah, let me say, we could not do it better.
A
It
is
a
different
image.
It
gets
the
problem
with
a
different
deployment
with
a
different
web
book
and
stuff
stuff
like
that.
So
let
me
say:
cap
bk
and
kcp
are
posted
in
the
kubernetes
config
in
indicate
in
the
capital
base.
A: One more need: CAPI itself started to be used as a library by the providers, and the providers started asking: okay, but in CAPI you have this bit of code, can you make it public, so we can reuse it? And the same need more or less exists also between CAPI itself, the bootstrap and control plane providers, and the experiments. So, yeah, there are some functions that we want to reuse, so people are not forced to reinvent the wheel.
A
I
don't
know
some
common
error
that
you
get
everywhere
in
copy.
Some,
I
don't
know
feature
gate
management.
A
So
version
management
every
provider
has
to
get
the
version
in
its
own
binary
stuff,
like
that,
when,
when
we
build
the
same
for
kubernetes,
and
so
we
created
a
version
folder
and
everyone
can
reuse
it,
and
so
these
are.
These
are
the
goal
of
these
three
folders
four
folders.
Please.
E: Yeah, can we get an example, like, for the feature package? Like you're saying, when someone wants to build the library and wants other projects to use it, extend it: what features exactly are you referring to? Like you mentioned, controllers use some...
A: It imports these, and you can add the flag, so the user, when starting this controller, can pass, say, --feature-gates MachinePool=true.
A: And it is also used... okay, this is a simple one, but let's look.
A: So, yeah, again, there is other stuff I could show you here, but, I don't know... yeah, then you can do stuff like this: if the feature gate is enabled, then do something.
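The pattern described above, a --feature-gates flag on the controller binary plus an Enabled check guarding optional code paths, can be sketched as follows. This is a toy stand-in using only the standard library, not CAPI's actual feature package; the type and flag names are illustrative.

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
	"strings"
)

// FeatureGates is a minimal gate registry: gate name -> enabled.
type FeatureGates map[string]bool

// Set implements flag.Value, so --feature-gates=MachinePool=true,Foo=false works.
func (g FeatureGates) Set(v string) error {
	for _, pair := range strings.Split(v, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return fmt.Errorf("expected key=value, got %q", pair)
		}
		b, err := strconv.ParseBool(kv[1])
		if err != nil {
			return err
		}
		g[kv[0]] = b
	}
	return nil
}

func (g FeatureGates) String() string { return fmt.Sprint(map[string]bool(g)) }

// Enabled guards optional code paths, e.g. starting the MachinePool controller.
func (g FeatureGates) Enabled(name string) bool { return g[name] }

func main() {
	gates := FeatureGates{}
	fs := flag.NewFlagSet("manager", flag.ExitOnError)
	fs.Var(gates, "feature-gates", "comma-separated key=value feature gates")
	_ = fs.Parse([]string{"--feature-gates=MachinePool=true"})

	// "If the feature gate is enabled, then do something."
	if gates.Enabled("MachinePool") {
		fmt.Println("MachinePool controller enabled")
	}
}
```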
A
And
yeah
and
let
me
say
more
or
less,
the
same
applies
for
utils
in
util.
There
are,
I
don't
know
every
provider
I
have
to
read
the
config
probably
or
manage
it
in
some
way,
every
provider
in
some
cases.
Oh,
it
has
conversion
and
they
want
to
do
to
check
that
conversion
work
properly
conditions
annotation
patch
predicates.
So
there
is
a
lot
of
stuff.
Please
shivani.
A: We also have this third_party folder. This is something that we are trying to get rid of, because, yeah, we are doing cordon on the nodes: when we drain the machine, we do cordon and drain, and there was this library that already implemented this. I don't remember exactly the story of why we added the fork, but now we are trying to drop the fork and import the upstream Kubernetes drain directly, so we can drop this folder.
A: Yeah, there was something missing in the library, we forked it to get it, and then we sent the upstream PR, and now we are just waiting for a release to get in sync. So...
D
We
can
at
a
current
point,
this
package
is
already
dedicated
and
I'm
just
about
to
create
an
issue
so
that
someone
drops
that
package
so
we're
not
using
it
anymore.
We
just.
A: Okay, so, last bit, a set of recent changes. At a certain point, we recently graduated to v1.0, and we started asking ourselves: okay, being v1.0 means that we need to give better guarantees when we change stuff, and things like that, and, let me say, at the same time give to the people using Cluster API as a library a better, let me say, API surface, and, in parallel, start allowing the project to move faster.
A
Basically,
we
decided
that
okay,
there
are
stuff
that
are
not
not
meant
to
be
imported
by
someone
else.
So
util
is
fine.
It's
designed
to
be
imported
by
someone
else
we
agree,
but
how
do
we
implement
controllers
or
other
stuff?
It
is
an
entire
in
in
internal
detail
of
copy.
We
don't
want
to
expose
and
and
to
keep
comparability
on
every
sub
function
that
we
have
in
the
controllers.
A: And, yeah, this was driven by the need of keeping the project surface as small as possible and exposing just...
A: Webhook manifests: I don't remember who asked before, but, okay, this drives the generation of the webhook manifests. But there is a thing: these old webhooks come with a limitation, so these webhooks only get the object in input, or at maximum the current and the old object.
A
These
web
books
are
more
powerful
because
you
implement
a
different
interface
custom
default
or
custom
validator,
and
these
and
using
these
new
interface.
A
A
They
come
with
consideration
that
we
have
to
do
but
yeah.
E: Yeah, I think you answered this now. So my question was only: why do we have the webhook implementation at the api level? Can't we also move it to the internal package, like we are doing with the controllers and other things, to reduce the public surface, basically? So I...
A
Yes,
to
be
honest,
we
we
don't.
We
don't
know
if
you
want
to
yet,
if
you
want
to
move
all
the
way
back
to
this
new
format,
we
are
still
to
figure
it
out.
For
now,
we
have
only
moved
the
web
book,
the
need
to
access
different
objects
for
their
validation
and
stuff,
like
that.
This
is
mostly
for
cluster
and
cluster
class.
Now.
A: My suggestion is that, for now, if you are happy with, let me say, the old-style webhooks that basically give you only the object, that's fine. Why am I telling you this? Because, as you know, webhooks are processed during the kube-apiserver request pipeline, okay, and they should be as fast as possible and as stable as possible.
A: With these default webhooks it's pretty simple: they get an object, take some decision, answer. Okay, if you put an additional client in the middle, it becomes slower and more fragile, because you have the API server calling a webhook, the webhook itself creates a new client, and it goes back to the API server to read and process and do stuff. So it becomes slower, and it is an overhead that, if not necessary, we should avoid.
D: But I think we have two different dimensions here. One is which interface you're implementing, the custom webhook thing or the other one, and the other question is: where do you put that webhook? Because we could have still put those new webhooks into the api package, and that would have worked, but there's no reason to do that. What we did is: we had some webhooks where we wanted to use that client, so we had to implement the new interface, because then we had a bunch of dependencies on new packages.
B: Yeah, and just to follow up on this one, like Shivani was mentioning: should we have, like, a similar internal kind of implementation in providers? Like... I think it doesn't make sense for providers to follow exactly the internal type of things, that's what I understand, because finally CAPI is the central point from which we all try to import stuff.
A: Okay, and, you know, the controller does not only have Reconcile; there is then a chain of calls, which is complex, okay, and in many cases you split up this chain of calls into different packages, scope, stuff like that, no? Okay, before, we were having these at this public level, which means that everyone importing CAPI was seeing these methods, and you don't have control over whether they use them or not, and technically you cannot change them, because you would be breaking the possible consumers.
A: If someone has many, many controllers running in a cluster and they wanted to create a bigger controller with CAPI or whatever, only a single controller running everything, they can do it: they just import the reconcilers, they set up the manager, and they bring together CAPI and everything they want. This is the level that we want to support, but not how we implemented the reconcile loop.
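The consumption model just described, exported reconcilers that a consumer wires into a single manager while the reconcile internals stay private, can be sketched with toy types. The Manager, Reconciler and SetupWithManager names below are illustrative stand-ins for the controller-runtime shapes, not the real CAPI API.

```go
package main

import "fmt"

// Manager is a toy stand-in for a controller-runtime manager.
type Manager struct{ controllers []string }

func (m *Manager) Add(name string) { m.controllers = append(m.controllers, name) }

// Reconciler mirrors the supported surface: you can register a reconciler
// with a manager, but its internal reconcile logic stays private.
type Reconciler interface {
	SetupWithManager(m *Manager) error
}

type ClusterReconciler struct{}

func (r *ClusterReconciler) SetupWithManager(m *Manager) error {
	m.Add("cluster")
	return nil
}

type MachineReconciler struct{}

func (r *MachineReconciler) SetupWithManager(m *Manager) error {
	m.Add("machine")
	return nil
}

func main() {
	// A consumer building one binary that runs several controllers together.
	mgr := &Manager{}
	for _, r := range []Reconciler{&ClusterReconciler{}, &MachineReconciler{}} {
		if err := r.SetupWithManager(mgr); err != nil {
			panic(err)
		}
	}
	fmt.Println(mgr.controllers)
}
```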
A: Okay, but how is this webhook implemented? If we look at Cluster, we have, yeah, we have Default, but inside Default we have a bunch of nested methods, defaulting the cluster variables, stuff like that, defaulting the machine deployment variables. We don't want to make these methods public; we want to be able to change them whenever we need.
A: If you go back to v0.4 and you look under the controllers, you start getting something more, machine set, machine deployment, and you start getting some internal stuff. But the problem is even worse, because you don't only have this stuff: you also have a set of sub-packages that we are exposing, and these are a lot of details that we don't want to commit to support. They are not designed to be used by others. I can go here and show, for the kubeadm control plane, yeah, we have...
A
No,
it
is
no,
it
could
be
yeah.
I
don't
remember,
on
top
of
my
mind
some
example,
but
the
problem
is
that
we
are
exposing
internal
implementation,
detail
of
our
controllers.
We
don't
want
people
to
start
using
them,
rely
on
them
and
we
don't
want
to
break
them
if
we
are
implementing
a
new
feature
and
somehow
we
refactor
a
controller
how
it
is
implemented
internally,
we
we
want
to
as
a
cluster
api.
C: So I'm not sure if it was already discussed, but we have the bootstrap provider and KCP in the cluster-api repo. We could also have kept them separate and deployed them separately, like we do for providers. What was the reason we did it like this?
D: And I think testing is also a lot easier, because now, as it is with Cluster API, we can run end-to-end tests where we have core Cluster API, we have an infrastructure provider with CAPD, and bootstrap and control plane, right, and we can just test that everything works with each other at the current version. If those were four different repositories, it would be way, way more complicated.
A: For that reason. The only thing I would add, which most probably you already know, is that over time, under test, we added another thing, which is the test framework, the e2e test framework, that all the providers are using. Also this library is meant to be used by people, so it is like util: it is a service for others. We are the first consumer, but others can use it too.
A
I
understand
that
there
is
a
lot,
but
yeah
dltr
is
that
in
capi
there
are
a
lot
of
stuff
we
are
trying
to
make
order,
especially
we
are
trying
to
make
it
clear
what
are
the
stuff
that
we
intend
to
give
to
to
be
used
by
other
and
what
are
the
stuff
that
that
are
cap,
internal,
that
that's
the
the
trend
that
we
are
following.
D: Just want to ask: I think probably we should end the session with the Makefile targets, and we will move everything else to the next one, so that it doesn't explode.
A
Yeah,
it
makes
sense,
and
I
try
to
make
it
short.
So
luckily,
recently
we
had
a
pr
merging
and
making
some
ordering
in
the
in
the
make
file
and
the
pr
is
basically
grouping
make
target
by.
Let
me
say,
developer
workflow
cycles,
so
the
first
things
that
a
person
developers
usually
do
does
after
writing.
Code,
especially
writing
api-
is
that
it
has
to
generate.
A
You
have
to
generate
the
manifest
so
the
things
under
config
you
have
to
generate
a
deep
copy
or
you
have
to
generate
conversion.
That
goes
on
the
api
stuff
and,
as
you
can
see,
we
have
uber
generate
that
generate
everything
which
at
the
end
is
super
slow
and
and
then
you
have
oh,
I
want
to
generate
all
the
crds
or
I
want
to
generate
all
this
here
this
only
for
core
or
I
want
to
generate
the
crd
for
kubernetes
bootstrap
and
same
goes
from
the
other,
so
uber
subtask
number
one
and
sub
sub
task.
D: Maybe, maybe one small tip: generating the manifests is pretty fast, so you don't really have to care, just run it once; the same for deepcopy. But you really have to care about the conversions, because they are...
A
That's
it.
This
is
super
soon
we
have
another
generate
stuff
which
is
diagram
from
that
generates
a
plan
to
ml
files
which
are
on
the
book
so
not
use
them.
So,
and
this
is,
let
me
say
first
first
step,
I
write
code
change
an
api,
but
then
I
needed
to
to
generate
so
everything
gets
aligned.
Second
step
is,
is
linked
and
verified,
so
make
modules
basically
runs,
go
mode
tidy
and
ensure
the
module
they
make
linked
link
fix.
Then
we
have
a.
A: Okay, after running these, you should get your code ready to be built, and so the next step is build. Again, we have build clusterctl, we have build managers, so the binaries, all the binaries or one of them, or you can build the Docker images. A Docker image basically contains the binaries, all of them or only one, and, yeah, you can also build the end-to-end framework that runs stuff. So far, so good. So in the developer workflow: I wrote code, I linted, I built.
A
I
checked
that
everything
builds
technically
next
step
is
test
or
develop,
so
you
can
run
tests
with
the
either
or
you
can
run
them
with
the
make
target.
A: You can run simple tests, unit tests; you can run tests generating a JUnit report, coverage, verbose output. So this is the first way to test stuff locally, and then, with Stefan, we will talk a lot about these. Then there are the docs utilities: if I'm doing a PR and changing stuff in the book and I want to see how it looks, I can use make serve-book, it spins up a local copy of the book, and I can navigate and check if the links work.
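The grouping by developer workflow that this walkthrough follows, generate, lint and verify, build, test, docs, can be pictured with a sketch like the following. The target names and recipes here are illustrative, not the actual CAPI Makefile.

```makefile
## --------------------------------------
## Generate / Manifests
## --------------------------------------
generate: generate-manifests generate-go-deepcopy generate-go-conversions

## --------------------------------------
## Lint and Verify
## --------------------------------------
modules:
	go mod tidy
lint:
	golangci-lint run

## --------------------------------------
## Build
## --------------------------------------
managers:
	go build -o bin/manager .

## --------------------------------------
## Testing
## --------------------------------------
test:
	go test ./...

## --------------------------------------
## Documentation
## --------------------------------------
serve-book:
	mdbook serve docs/book
```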
A: Then, release: typically you don't need to use these targets, they are called by the release automation, and they create the release manifest for CAPI, the release manifest for CAPD, all the release binaries. And these are, yeah... sometimes we deploy to staging, sometimes we publish, for instance, the nightly release, so it can be consumed by others.
A
And
these
are
all
the
stuff
that
probably
we
can
remove,
because
now
we
publish
images
using
image,
promotion
and
stuff
like
that
before
we
were
doing
stuff
manually.
So
I
don't
think
that
we
need
them
all
unless
you
want
to
push
to
some
of
your
personal
or
rent
or
company.
A: The last bunch of targets is clean. All this stuff creates folders; usually we compile utilities, like, I don't know, kustomize or whatever utility we are using, we compile them, but sometimes they get stale. So we have targets to clean up these binaries, or the book, or the release artifacts, or whatever, and also to clean up the conversions, so we can regenerate them from scratch. And, yeah, the last group is technically stuff you...
A
You
should
not
care
about
because,
for
instance,
controller
gen
when
you
run
make
generate
behind
the
scene,
it
generates
controller
gen
but
yeah.
If
you
want
to
build
your
own
local
copy,
I
I
don't
know
from
a
for
the
banking
purposes
or
whatever
you
can
do
so
they
are
kind
of
secondary
goals,
make
targets
but
yeah.
They
are
useful
and
I
guess
that's
all.
Hopefully
the
new
structure
helps
and
is
there
some
question
or.
A: Yeah, a lot, then. So I can explain it if you want... yeah, please, please go on; otherwise, in the meantime, I can open it. Yeah.
D: Okay, so depending on which provider you look at, we're publishing different images at different stages. So in CAPI we have that release-staging target, and that is only pushing Docker images. So release-staging is just pushing Docker images; we are doing that after each merged commit on main. So there are images for each, yeah, each commit on main, and also for each tag when we release. When we look at release-staging-nightly, it will also publish Docker images, but it will additionally publish manifests.
D: So, in summary, we are running release-staging after each merge, so we get Docker images, and we're running release-staging-nightly once per day, at, I don't know, eight, I don't know the time zone, and then you get manifests and Docker images, so you can actually pick up a nightly CAPI version. Yeah, yeah.
D: We could consider also publishing manifests after each commit, but we're not doing that here.
D: Yeah, I think the limitation with what we're currently publishing is that you can only say "I want to have the nightly version from that date", but you can't say "I want to have latest". If you published after each commit on main, for example, then someone could say "I always test against CAPI main, so I always get the new stuff and I see when it breaks."
D: I think it's good to take the time to, yeah, to answer questions; that would help folks. So, yeah, as we wrote in the chat, we'll figure out how to continue and how to make this subscribable and all that stuff, but I posted something in the Slack channels where you can see, yeah, how to continue with it.