From YouTube: SIG Cluster Lifecycle - Cluster API - Development/debugging with Tilt (EMEA/Americas) - 2022-02-28
A: Hello everyone, let's chat about local development and debugging with Tilt. I just want to show how we can develop locally in Cluster API. If you have any questions, feel free to ask at any point in time. If you don't want to interrupt me, just raise your hand and I'll get to you.
A: Hopefully you can see my IntelliJ and my Chrome window. Can anyone confirm?
A: Okay, perfect, good. So first of all, if you want to follow these sessions: we have a discussion on our GitHub repository where we chat about previous and future sessions. If you have any feedback, or wishes for future sessions, just respond there; that would be great. But now to the actual topic, Tilt. I'll start with some context. We're using Tilt for local development and debugging. What does that mean? With Tilt we set up a local management cluster based on our current code, on whatever branch we currently have checked out. Then we can change the code, redeploy, debug, etc. So, those are the high-level features.
A: Then we can debug into the running controllers; if you have to debug some bugs, you can just add a breakpoint. And, what has become more and more useful lately: we can take a look at which metrics and which logs our controllers are emitting, to figure out if they are already good enough or if we want to improve them. We have one page in our book which hopefully describes most of this, called "Rapid iterative development with Tilt". So let's start with the prerequisites.
A: So what do we need? Of course you need Docker, basically to create a kind cluster, which is the standard basis for management clusters; so we also need kind. Then we're using Tilt, of course, and then kustomize and envsubst. Those are mostly needed because, when we create a management cluster, we are basically simulating clusterctl init; or let's say we're simulating the combination of a release and clusterctl init.
A: So when we make a new release, we run all our YAML files through kustomize, and then we have those manifests that we actually release. That's the first part; and then clusterctl init later on downloads all those files and runs envsubst on them. So we build both of those binaries for the development workflow, and we have make targets for them. What can happen is that you end up with outdated versions over time; then you have to run make clean-bin and build them again. We currently don't have any kind of detection of whether those binaries have the right version. They are also stored under a certain, let's say, Cluster API specific path, hack/tools/bin, so you're not using any envsubst or kustomize that you have somewhere else in your PATH.
A
Then,
if
you
want
to
deploy,
we
called
it
observability.
So
something
like
prometheus
loki,
grafana
prom
tail.
Then
you
also
need
helm
because
we
consumed
those
assign
drugs
and
last
steps
clone
cluster
api
and
every
every
provider
that
you
want
to
use
with
ted
yeah.
So
now
let's
actually
use
it.
So,
first
of
all,
you
have
to
create
a
countless
try
to
make
that
a
little
bit
bigger.
A: So let's take a quick look. As I said, if there are any questions, just ask. So, what does that script do? I won't go into the details, but essentially it creates a kind cluster with those volume mounts and with a registry configured. Oh okay, the network part seems to be for the registry. So first we create a kind cluster with that configuration, which is what you want for CAPD, as I mentioned; then we're doing some network magic; and then we're deploying a registry on top. Once that script is done, we have a local kind cluster with a registry running on top, which Tilt will use to push images to and pull images from.
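For reference, the relevant parts of such a setup look roughly like the following kind cluster config (registry name and port are illustrative; the script in the Cluster API repository is the source of truth):

```yaml
# kind cluster config wired to a local registry container on the same
# Docker network (illustrative values).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # CAPD needs the host's Docker socket mounted into the node.
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://kind-registry:5000"]
```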
A: That's why I already ran it before: it takes, I don't know, one, two, three minutes or so, and I didn't want to wait. Then, a minimal tilt-settings. We have a Tilt configuration file where you can enable all kinds of features, which we've documented here, but I would say a minimal tilt-settings looks roughly like this.
A: So I think the most popular providers like AWS should actually already have those files, but if they don't, what you have to do is add a config file like this. That config first specifies the name of the provider. Then, I think, there's the image name, which is used by Tilt; it probably has to match the image name that you have in your YAML files or something, so that Tilt can automatically redeploy.
A: Then there's the configuration for which files should be watched for redeployment; which files and packages. Then we have a label, which we'll see later in Tilt; that's for the UI. And the manager name. I'm not sure what exactly some of the individual fields are used for, like the manager name here, but it should be very straightforward to fill out. And once we have all of that, we can just run tilt up.
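The provider config file being walked through here is a tilt-provider.json in the provider repository; a sketch with example values (the field set follows what is described in the session, so treat names such as manager_name as illustrative):

```json
{
  "name": "aws",
  "config": {
    "image": "gcr.io/k8s-staging-cluster-api-aws/cluster-api-aws-controller",
    "live_reload_deps": ["main.go", "go.mod", "go.sum", "api", "controllers", "pkg"],
    "label": "CAPA",
    "manager_name": "capa-controller-manager"
  }
}
```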
C: Am I coming through correctly? Can you hear me? Yeah, so I had a question about the tilt file that we were looking at: if we change the name of the manager, will it create a manager with the name that we changed it to?
C: Yes, in the previous slide, in the tilt file where we're giving the image name and the AWS manager name in the provider file: if we change the name, will it change the actual container name as well, or is it just a placeholder?
A: I think it is used to match something, or maybe it's not even used anymore. Do you know, by any chance?
A: Yeah, I'm not sure; I would have expected so. We're reading that JSON file somewhere here, and I would have expected that we actually use that field, but I'm pretty sure we don't, which would mean that that's probably legacy.
D: Yeah, I actually worked on adding that. So if you look at the PR they posted in chat, it actually has the name when you see the tilt file; yeah, go to the conversation section.
A: Yeah, so I'm definitely sure that we need the label. I'm just super confused about whether we actually use the manager name somewhere, because I would have assumed that my grep here works, but it doesn't.
D: Oh yeah, so Stefan, if you go to the conversation and go all the way down, I actually captured a screen image.
A: But otherwise, I mean, the only way we could actually be using it is with some kind of string concatenation on the manager name key, because otherwise I would have really assumed that I'd find it somewhere in a third file. But let's see. Vinnie, you also raised your hand before; was it for the same topic, or not?
D: Oh yeah, I actually have a different question: can I run multiple providers in Tilt? The reason I'm asking is that I actually ran CAPG and CAPA at the same time, but it's not working for me. So I'm not sure if I'm doing something wrong or if it's not supported.
A: It should work. I was frequently running CAPD and CAPA in the same cluster, and that worked; but it would be interesting to see if there are actually some incompatibilities between those two providers. I would say in general it should work; we're not aware of any dependencies that should make it impossible.
D: Yeah, I'll actually try it again. I just saw an error that it cannot find a default provider. I can check it again, and if it happens, I'll raise an issue.
A: Yeah, absolutely, please let us know. Okay, good. Let me check where I was; roughly at tilt up. If there are no other questions, then I'll continue; but feel free to ask at any time.
A: Okay, so I ran tilt up, and the first thing that Tilt does, or one of the first things, is running tilt-prepare.
A: I'll go over that very quickly, and if we have some time at the end, we can go into it further. Essentially we have our own binary, and when the Tiltfile here is run, it first generates a lot of resources; and then, or maybe before, probably before, I guess somewhere at the beginning, it runs tilt-prepare, passing in a bunch of parameters, and tilt-prepare renders the YAML files.
A: So when you look into .tiltbuild/yaml, you see all kinds of things that have been generated by tilt-prepare. tilt-prepare is also doing more things; it's also deploying cert-manager, for example. Did I miss anything? So, I don't know: we're generating YAML files and deploying cert-manager; I'm not sure if there's more; provider YAML, quite a few resources.
B: Before, you had to start from the config file, apply kustomize and envsubst and so on and so forth, then you had to apply that to the cluster, then you had to install cert-manager, wait for it, stuff like that. Before, all those steps were basically implemented in the Tiltfile, so that file was complex and long, and they were executed sequentially, which meant that starting your Tilt environment took something like four or five minutes. Now, with this tilt-prepare stuff, we're executing the two sets of tasks in parallel, and this basically gives you everything that's needed to start the Tilt environment in around 20 seconds.
A: Yeah, exactly; mainly because it's way faster and nicer to program than, I think, Starlark, which is what Tilt is using. So I think eventually our goal is to only keep in the Tiltfile what is either easier to do there or what has to be done there. There are things like resource definitions, for, I don't know, docker_build or something, that have to stay there; but otherwise I think we'll try to move everything that makes sense into the tilt-prepare binary. There's one PR open for that, for example, too.
A: Oh, I shouldn't go too deep. So we're doing some customization to our entrypoints and commands, and that's currently done partially in the Tiltfile and partially in main.go, and now we're trying to move it entirely to tilt-prepare, just to make it easier and testable and all that stuff. Okay, so yeah.
A: So when I ran tilt up, first it ran tilt-prepare, which, as mentioned, generates a lot of YAML files and also deploys cert-manager; and then Tilt calculates all those resources, and that's, for example, where the labels come in. So when we look here, we have different groups; that's just, I guess, a category or a group defined by labels, something like that. So we can see all binaries of that type, which are just the builds of an individual binary.
A: I think it's a multi-stage build, correct? So yeah, definitely. And once that build is done, the image is pushed into our local registry, and then we're deploying the corresponding deployment, which is essentially the same as what clusterctl init does. We try to make this as similar as possible to clusterctl init; there might be small differences, and let us know if you have some problems, but it should be, I guess, 95 percent the same, and it works great for us.
A: Once the deployment is actually deployed, Tilt waits for your pod to get ready, and then it tails the logs. So now we're seeing that. If you want to redeploy; I'm still not sure which buttons I have to click; but let's say I want to rebuild the binary.
B: So Tilt has this nice feature which is basically a live reload: you can change your code and then redeploy without restarting your environment, which is super cool when you develop. In order to make Tilt live-reload, we are not using the same Dockerfile that is used when we release CAPI; we are using a slightly different one that basically splits the build process in two. The first step is the build.
B
We
build
the
binary
locally
and
if
you,
if
you
look
at
the
folders
in
the
field
in
the
tilt,
build
there
is
bin.
So
we
we
build
the
binary
locally
and
and
then
we
embed
the
binary
in
the
local
images.
This
is
much
faster
than
basically
having
a
tilt
mounting
your
local
source
file.
B
So
we
we
have
this
two
stage
build
in
tilt,
and
so,
whenever
you
change
your
code,
if
you
are,
if
your
tilt
file
configured
for
automatic
reload,
everything
happens,
tilt
knows
the
dependencies,
and
so
it
repeats
the
binary
and
then
it
deploy
the
controllers.
B
If
you
go
for
manual
update,
which
is
something
that
I
usually
do
because
I
want,
I
don't
want
it
every
time
I
say,
but
I
save
a
fight
everything
gets
redeployed.
Usually
I
change
235
when
I'm
sure
about
my
change.
I
I
decide
now
it
is
time
to
redeploy.
So
if
you
want
to
control,
you
have
to
click
these
two
buttons.
A: Okay, so yeah, I got the automatic redeploy feature if it's enabled; I was just assuming that even if it's disabled, I only have to click one of those buttons, but okay, apparently not. Good. By the way, I just had two manager binaries here; that was just because one was very old. But I'm still wondering why we only have one manager binary building here; I hope all that stuff isn't run sequentially, but I'll check.
A: Okay, good. So yeah, we talked about binaries. Controllers: that's the same, just grouped by provider, and that's where the label is used to group them, by, I don't know, CAPD, CAPA, whatever. By the way, for the providers which are in the core repository, we essentially hard-code that information; so we basically also have a provider JSON, but it's just already embedded in our Tiltfile. It's not like we need different information; it's just that we already have it there. Yeah, and then we have observability.
A: I'll show the config later on, but essentially we can currently deploy, let's say, logging and metrics; I'll show the configuration later. So that's that page; there's also this one, which shows a few more things. The other thing was the resource view; as far as I know, that's the overview. What you can also do here: for some of our deployments,
A: we have a port-forward configured. For example, for Grafana I have localhost:3001 here; I can just click on it, and then, out of the box with the configuration I will show in a minute, you already get a deployed Loki stack, for example.
A: So you can get roughly the same today on Cluster API main; just the logs are slightly different. They are not that good yet, but hopefully they will be soon. So if you want to see the logs of a specific controller, you can just filter by app. I didn't deploy a workload cluster yet, so we'll see later how to filter on clusters and things like that; you can also switch the data source to Prometheus.
B: Sorry, the idea is that we want to make logs and metrics part of the developer workflow, so we can basically look at them like people do in production; and this is why we are making these tools part of our developer environment.
A: That's roughly the idea. Okay! So next up, I'll show a little bit of what you can configure in tilt-settings. As I mentioned before, the full reference is here, so feel free to just go over it.
A: But I'll just give a quick overview of what's possible. First thing, allowed_contexts: that's just a safety mechanism where you can say that Tilt should only deploy to a specific kubeconfig context. If you are also deploying to some other Kubernetes clusters, and you want to make sure that you're not deploying to production, which happened to me once or twice at another company, then you should set allowed_contexts to be safe. It happens very quickly: you just play with kubectl, your context switches, you run another tilt up, and yeah, you deploy interesting stuff onto customer clusters.
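The safety mechanism just described is a tilt-settings field; something like this (the context name is an example):

```yaml
# Tilt refuses to deploy to any kubeconfig context not listed here.
allowed_contexts:
- kind-capi-test
```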
A: Next up, the trigger mode, which was already mentioned. I'm not sure what the default is, but you can set it to something like automatic; you have to google what the exact enum value is; and you can also set it to manual. I'm personally using manual too, because I'm using auto-save in IntelliJ, which is kind of the default as far as I know, so it would deploy all the time: if I click into another window, the files get saved and it starts deploying, and I just want to have more control. But of course both work.
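In tilt-settings this is the trigger_mode field; a sketch (double-check the exact enum values against the Tilt documentation):

```yaml
# "manual": Tilt rebuilds/redeploys only when you click the update button,
# instead of on every file save.
trigger_mode: manual
```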
Then the registry: I'm not using it, but in theory you could configure which registry Tilt is working with, so you could also push to, I don't know, Docker Hub or GCR. That probably doesn't make a lot of sense, because locally it's just faster, but in theory you can do it. Then, provider repositories.
A: That is important if you use providers which are not part of the core repository, so that we can actually find them. The usual pattern is to just check out the providers at the same directory level, and then you can reference them like that; but of course you can place them wherever you want, you just have to figure out the path. I usually keep those configured for all the providers I'm actually using, and then, when I want to enable a provider, I just use the enable_providers list, because that decides what is actually deployed.
A
So
in
my
case,
I'm
just
using
docker
in
kubernetes,
but
you
can
also
deploy
aws
or
other
things
talking
about
aws.
Some
providers
require
additional
configuration,
which
is
document
for
some
of
them
here.
So
aws,
for
example,
requires
credentials.
So
if
you
want
to
inject
credentials
into
the
aws
controller
deployment,
you
have
to
set
that
customized
substitution
here.
A: And since we're at kustomize substitutions: that is roughly what you'd usually do with envsubst. When you look at our quickstart page and all the variables that you can set when you deploy with clusterctl init, it should be that section. So if you want to customize some of that stuff, you can put it all there.
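A sketch of the kustomize_substitutions block, combining a provider credential and a feature-gate variable (the variable names are recalled examples; check the quickstart and provider docs for the authoritative ones):

```yaml
kustomize_substitutions:
  # Injected into the AWS provider deployment (base64-encoded credentials).
  AWS_B64ENCODED_CREDENTIALS: "base64-encoded-credentials-here"
  # Feature gates are set through plain variables as well.
  EXP_CLUSTER_RESOURCE_SET: "true"
```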
A
What
I'm
using
is
I'm
usually
enabling
just
all
feature
feature
gates.
We
have
because
I
usually
like
to
use
them,
so
you
can
also
enable
feature
gates
like
that.
They
are
documented
here,
the
names
of
those
feature
gates
but
yeah.
As
far
as
I
know,
we
currently
have
those
three
plus
ignition,
but
oh
yeah.
Probably
providers
have
more
as
far
as
you
know,
then,
deploy
search
manager,
so
you
can
enable
or
disable
if
tilt
up
will
deploy.
Certain
should
not
only
actually
makes
sense.
A
If
you
want
to
deploy
your
own
search
mention,
which
is
probably
not
really
required
for
the
deaf
environment,
then
we
have
some
extra
arguments.
So if you want to, essentially, append arguments to your managers: we currently have a PR open to improve that a bit, because there's some small gap there, but let's assume for now that you can just do that with all flags. So if you want to enable another logging format, for example, or change the verbosity, etc., you can just set extra_args. The string here has to match the provider name there, so they correlate to each other.
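The extra_args block, keyed by provider name, roughly:

```yaml
# Keys must match the provider names used elsewhere in tilt-settings;
# values are appended to the manager's command line (illustrative flags).
extra_args:
  core:
  - "--logging-format=json"
  - "--v=4"
  docker:
  - "--v=4"
```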
A: Then we have deploy_observability; those are the currently possible values, so you can deploy Grafana, Loki, Prometheus, Promtail. Grafana is just the UI; Loki is essentially a data store for logging; Prometheus is the data store for metrics; and Promtail automatically tails the logs of all pods and sends them to Loki. That's roughly what they are, but we also have them documented here. And then we have the debug configuration: as before, those keys match specific providers, and then that is the debug port.
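The observability selection as a tilt-settings fragment (component roles as just described):

```yaml
# Each entry deploys the corresponding tool into the Tilt cluster.
deploy_observability:
- grafana     # UI
- loki        # log store
- promtail    # ships pod logs to Loki
- prometheus  # metrics store
```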
A: So if you want to debug, you usually compile the binary with debug symbols, and then you connect with Delve against the port opened by the binary. Oh, that was not good. So let me show you; that's probably easier to understand. When you enable the port here and we now take a look at the manager pod: first of all, we automatically compile that manager with debug symbols, so you will see in that view, somewhere up there; I'm not sure if you can actually see it; but when we compile the binary, which is maybe here, yeah.
A: If you use continue: true, then the binary starts immediately; if you use continue: false, then your binary starts and waits for the debugger to be connected. I'm usually using continue: true, because it's just more convenient if the controller doesn't wait for you to connect. So I usually run all the controllers with the debug configuration, because it doesn't really matter for me, and I can connect to any of those controllers at any point in time if I want to; I don't have to redeploy anything.
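The per-provider debug block being described might look like this (the port value matches the one used later in the session):

```yaml
# Compiles the manager with debug symbols and exposes a Delve port.
debug:
  core:
    port: 30000      # local port to attach the debugger to
    continue: true   # start immediately instead of waiting for a debugger
```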
B: Yeah, a step back: with Tilt, let me say out of the box, what you get is that you can run your containers and you can see logs. And usually your development workflow is: okay, you write some code, you apply some YAML, and you look at the logs; does everything work? If not, you try to fix it, you redeploy, and stuff like that. So that is already a great improvement over the previous way of working.
B
But
doing
this
way,
basically,
you
don't
have.
Then
all
the
nice
debug
watches
stuff
that
an
ide
can
give
you
so
by
enabling
the
flag
that
that
that
stefan
is
showing
what
we
can
get.
We
can
get
not
only
that
tilt
runs
the
environment,
but
when
something
is
happened
we
can
place
a
breakpoint
and
jump
into
into
the
code
and
understanding
exactly
what
what's
going
on,
and
I
think
that
now
stephanie
is
going
to
show
this
exactly.
A: Okay, so the controller is running. What I'm doing now is connecting against one of our controllers; I'll just take the cluster controller, in the Cluster API controller binary. What I need is a remote debugging configuration. We also have the configuration for VS Code in our documentation, if someone uses VS Code, but it's essentially very easy: we only have to create a Go Remote debug configuration; the host is just localhost, and the port is the one from our configuration. So in my case I have some predefined ones.
A: So I bind the main controller on 30000, and then I have other ones for KCP, etc.
So what I do now is connect against that. You also see some kind of error message here, which is kind of fine; I'm not sure why that is.
A: But it's okay; a problem for another day. I just set a random breakpoint here at the start of our Reconcile. I'm currently not hitting the breakpoint, because I don't have any clusters in Tilt; so up until now we have a full management cluster, but we don't have any workload clusters deployed.
A: So what I usually do is: I have essentially example YAML files for everything that I want to play around with, collected over time. In this case I could just take a copy or generate a new cluster, but usually, for the things I play around with, ClusterClass etc., I just have some YAML files lying around that I can use. So that command will generate just a regular cluster; so that's also without ClusterClass, etc.
A: So from here on it's just plain Go debugging, which probably not everyone is using a lot, but it's not super complicated, and with some exercise it's kind of fun, at least for me.
A: Okay, so let's drop that breakpoint here. So that's the kubeadm reconciler. What you can also do, which is kind of interesting: you can also jump into webhooks. Let's just make an example: if you want to figure out how that webhook works here, KCP, let's give it a try. Here we have the validate-update function; we can just add a breakpoint here.
A: And if you're really interested in how conversion works, you can do the same for the conversion functions. So, just a very quick example.
A: If you want to see how the conversion works from v1beta1 to v1alpha4, and I'm not going into details, I have a separate video for that: you can just run kubectl get on the KubeadmControlPlane and specify v1alpha4 here. Because we're using v1beta1 to store the resource, that triggers the conversion from v1beta1 to v1alpha4. So if I run it... okay, something is wrong; probably that was the wrong function. So we have the ConvertTo and the ConvertFrom functions.
B: You're debugging a live cluster, basically. It is something similar to production, but Tilt is nice because it gives you the environment properly set up; doing this in production is...
A: Kind of bad, actually, yes. Okay, so one additional thing I want to show. What I did up until now was use Tilt to deploy a management cluster and all those controllers; then I did something manually, and then I debugged those controllers or the webhooks.
A: What you can also do, if you have to debug your e2e tests, is run an end-to-end test against that Tilt cluster. Usually an end-to-end test creates a new management cluster and uses that, but you can also use your existing Tilt cluster, and then you can essentially step through your e2e test and your controllers at the same time, so you can figure out what is happening during your end-to-end test if something is not going as planned.
A: You can just pause your e2e test, debug through your controllers, play around with kubectl, etc., and take all the time you need. Because usually, I think, the e2e test is kind of a black box: you run it with make or so, and then it works or it doesn't; maybe it times out after 15 minutes and then it fails; but it's not very easily debuggable locally, in my opinion. Please interrupt me if I forgot half the context that would be useful.
B: I think the main point is that, at the beginning of Kubernetes controller development, it was really hard; the developer workflow was just: oh, let's add a log line, recompile everything, test, and then iterate, and it took long. Now we can do a lot of fun stuff; we can debug live and things like that.
B: That doesn't mean that you have to do this always, but if you need it, it's there, and it's pretty simple to do. Tilt is really great, and we can do this for a normal Tilt cluster, so the local developer environment, or for e2e tests. So yeah, it is a really nice tool to have and to understand how it works.
A: Exactly, yeah. But it's very important to keep in mind that debugging is not always the best solution: you can spend hours going into, I don't know, all kinds of libraries that we're using, and it won't really help. So for me it's constantly thinking about whether debugging is actually the right way, or how deep you want to debug, or whether maybe just a log line somewhere is actually better. But just one comment about log lines.
A: When you are connected with the debugger, and I can only show this in IntelliJ because I don't know if VS Code has that feature: you can add breakpoints in the sense that the debugger will actually break and you can step through the code; but you can also add things like non-breaking breakpoints. So you can say here that the execution should not be suspended, and then...
A: ...this will essentially add a dynamic log statement. Whenever that breakpoint is hit, it doesn't stop, but it logs that line; and you can do that not only in our own code but also in third-party libraries, somewhere down in controller-runtime, etc., which would be relatively hard to do manually. I barely need stuff like that, but it can be useful sometimes. Okay, so going back to debugging end-to-end tests; we also have documentation for that part.
A: I'll just give a quick overview, but it's all documented here. We have a test environment working directory; we have the artifacts folder, which is where the resulting artifacts of the e2e run are stored; we have a pattern that is just a test function so that the test binaries actually start; it's all documented, not really important. Then you have an e2e config file, in our case docker.yaml; depending on your provider, that would be a different one. Then the Ginkgo focus is kind of important, so that you don't run all the tests locally; I would really only recommend running one test, also in that style.
A: So I'm just filtering on PR-Blocking, which, in core CAPI's case, is just the quickstart end-to-end test, and I'm using verbosity true which, as far as I know, also enables the streaming feature. Ginkgo usually, per default, or in CI at least, runs all the tests and then prints out the log output of each individual test at the end of that test, so you get blocks per test; and that setting enables streaming, which essentially means that every log line is printed out immediately.
A: Which, in my case, just means that if I run that configuration, I don't have to wait for the logs until the end of the test; I see each log line as soon as it is actually executed.
So that's the configuration... or, I missed something, sorry: that is the regular configuration if you just want to run the tests against a new cluster. But in core CAPI there is an additional config flag called e2e.use-existing-cluster.
A: That flag probably doesn't exist in providers, but it should be pretty easy to add if you want it; just ping me somewhere and we can make it happen. That flag tells the e2e test that it shouldn't create a new cluster; it should essentially just use the current context, and if the cluster already exists, then it assumes everything is fine.
A: So actually you won't really see in the log lines that something is different; it will just be faster. Oh, one second, it somehow wants a password from me... okay, but it works.
So the end-to-end test reads the e2e config file; it creates a local repository, which is, yeah, all those manifests of the provider deployments; and then... I'll have to take a look at what it does for the logs.
A: If that's doable; but I'll take a closer look. Then it accesses the already existing bootstrap cluster and it tails all the logs, and then we're getting to the actual test case, which is here. Now we can just step through it and, at the same time, also step through the controllers and see what they're doing, if I want to. So, let's see... yeah, that is selected, that's okay.
A: So if you're confused about that icon: it only means that, with the currently selected debugger, that code is not in the binary that I'm debugging. Currently I have the e2e test debugger selected,
A: which means that it's connected against my local e2e test, so I see those red dots here; but of course that debugger is not connected against the controller. So if I switch to the controller here, then I see, oh, it's active. Okay, so, let's see; I think essentially we're deploying the cluster here somewhere. Just a very simple example: let's say for some reason the cluster creation fails, and usually the test... oops, okay, that was the other controller, the other debugger.
A: So usually I would apply that file, and then it would wait for the cluster to exist, and potentially it would time out. But one technique; my phrasing is a little bit much, but one thing which is very useful with e2e tests: you can just set your breakpoint before. Oh, and I really have to continue here; I shouldn't block the controllers while the e2e test is deploying something, because then the webhooks don't work. So I essentially just wait here, before any kind of wait is started.
A: It can be very helpful to just pause it somewhere in the middle and then continue with kubectl and the debugger, to figure out what your issue is, if you have some broken tests. Otherwise it's just rerunning your tests over and over again.
So that's, for me, the big improvement: I can just take a look at the logs of the failed test somewhere, then I can start it locally, set a breakpoint somewhere before the problem happens, and then just continue and find the issue. Usually I only have to run the test once or twice, compared to trial and error, essentially. Okay, I think that's roughly everything I had. Any other questions or comments?
A: Okay, good, yeah, alright.
E: I don't think this is really related to this particular session, but if you look in the e2e logs, I always see those S's continuously coming; what is their significance? I actually tried to find it in the logs of the test as well, I went down a rabbit hole, but I never found it. So, while we are at it, in case you know...
A: Okay, perfect, yeah. Have fun, and please let us know if there are any issues or if you want to have some other features here.