From YouTube: SIG Cluster Lifecycle - Cluster API - Development/debugging with Tilt (APAC/EMEA) - 2022-02-24
A
Okay, recording started. Let me share my screen. Okay, so let's start. Hello, everyone! This is the "let's chat about" session on local development with Tilt. Just a little bit of context: we have a discussion with the current and previous sessions, so if you want to get updates, you can just subscribe to that discussion and follow along there. If you have any feedback or ideas for further topics, feel free to comment in that discussion.

A
Okay, good, I just read the chat — I hope you can hear me well. Feel free to just interrupt, or, if that might be hard, just raise your hand; I'll try to see it on my second screen. Okay, good, just sorting some stuff, okay, good. So what is Tilt, or the Tilt dev environment?
A
So the idea is that when we develop Cluster API, we can run essentially a local management cluster, with all our controllers and also with infra providers, if they are compliant, with a little bit of configuration. Then you have your local development cluster, your local management cluster. You can run your current code, you can modify your code, you can debug it. You can take a look at logs and test against that cluster if you want, and just develop, play with the code, figure out how everything works. That's the rough idea of why we have it and what we're using it for. It comes very, very close to what happens if you create a bootstrap cluster and then run clusterctl init with providers. The only difference is that it uses your current local code, essentially; apart from that, it essentially simulates clusterctl, partially, at a high level. Just to give a high-level overview:
A
Of course you can deploy and redeploy your local code changes. You can debug providers by just connecting with a Go debugger. We have Grafana deployed there, so you can take a look at metrics and logs, all that stuff.
A
I think that's roughly it. My personal take on when I'm using it: usually when I'm developing I'm, of course, if possible, just using unit tests or so, but it becomes more relevant if you want to see how everything works end to end, how multiple controllers work with each other, and stuff like that. So for me it wouldn't really replace something like using envtest for development. And finally, I would say it's our equivalent to prod.
A
A lot of folks are mainly working on upstream Cluster API, so we aren't really running all that stuff in production, with a logging stack and all of that. So essentially that's our feedback loop: to hopefully use Cluster API similarly to how users are using it, and to improve it accordingly if the logs are not that great, or if metrics are missing, et cetera. Okay, but now let's get concrete. First of all, I'll start with how to set it all up.
A
So we have this documentation here in our book, and I'm intentionally on the main page of our book, because I cleaned it up a little bit recently.
A
So what you need to run it: Docker, of course, which is obviously needed for kind, because Tilt is only deploying onto a Kubernetes cluster, and we're using kind to actually create that cluster, the management cluster. Then Tilt, obviously. Then we're using kustomize and envsubst. We need those because, as we are essentially simulating clusterctl, we have to render all our YAML files so we're able to deploy the providers and stuff, and envsubst is for replacing the variables we have in our YAMLs.
A
Then Helm. You don't need it for everything, but we're using Helm to deploy Grafana, Promtail, Loki, stuff like that. So we have an option to deploy some observability tools, and if you want to use that, you also need helm in your PATH. Apart from that: clone the cluster-api repository, clone all the providers you want to deploy, and that's it. Then the first thing you do is create a Kubernetes cluster. We are only actually using it with kind — I assume you can use any Kubernetes cluster, but for us it's just the easiest way to set it up. We have one shell script here in our repository to make it easier. So in theory you can just run kind create cluster, but if you want to use CAPD, then you have to create the cluster with certain flags, because CAPD requires some additional volume mounts, so I'm usually just creating the kind cluster with that script so that CAPD works.
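The setup described here boils down to two commands; the script name below is taken from the cluster-api repo's hack folder, but it may differ between versions, so treat it as an assumption and check your checkout:

```shell
# From the root of your cluster-api checkout: create a kind cluster
# with the extra volume mounts that CAPD needs.
./hack/kind-install-for-capd.sh

# Then start the Tilt environment from the same directory.
tilt up
```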
B
Actually, my question is regarding kustomize. I have tried `make kustomize`, but it says no rule is defined for the kustomize target. So where exactly do we need to run it — from the root, or somewhere else?
A
Good point — nice finding! I'll note that for later, okay.
A
I think we can figure that out. I'm not sure where the target is, but essentially, as part of our Tiltfile we're using a Go binary, and that Go binary is using kustomize from that folder. So — what did I probably do incorrectly... I would assume that we at least have that hack/tools kustomize target, which should build it.
A
Okay, to be honest, I don't know, but I'll follow up and figure it out. Apparently I did run it in the past, so it just works — maybe.
B
Is it possible that you have some other installation of kustomize elsewhere in your system? Because here it's also not working for me, I think — I also tried today with the latest, and it's not working for me.
A
No problem — as I said, I'll follow up and figure out how to make it work for you. But it doesn't really help if you have kustomize somewhere in your PATH, because we're using that path hardcoded — we'll actually only pick up that one. I'm slightly confused why the target doesn't work.
A
Just checking one other thing. I mean, even the auto-completion works, because — oh sorry, I misinterpreted my log output: it isn't actually saying the target doesn't exist, it just tells me that there's nothing to do. So if it doesn't work for you because you have an old kustomize version, you have to run `make clean-bin` and then `make kustomize`. Are you then getting "target not defined", or is it then reinstalling kustomize?
B
Actually, it was not working, so I installed it with brew and then it worked fine for me. At least I tried it with the clean-bin and then it worked. Okay, good.
A
Yeah, but what we currently definitely don't have is verification that we have the right kustomize version — we're just running the target. I think with kustomize, in the current state of main, we're not actually updating the binary.
A
So there's a PR out which — either that PR or a follow-up — should fix that situation. But currently we're just installing kustomize, and if it's there, then it's there; we're not changing the version. That will be gone, though. What we could definitely implement, if someone wants to follow up or open an issue: we have that binary here which uses kustomize, and it could check whether kustomize has the right version before actually using it, and then just print out an error.
A
Let's see, okay. Then we need the tilt-settings file. Since a recent PR you can write it either in YAML or JSON; YAML might be easier, because you can just comment out stuff — a little bit less hacking to do. For now I'll only show the basic settings. I didn't really try it again, but I think the minimal config you need, if you want to deploy some additional providers which are outside of the core repository, is: you need the provider repos and you have to enable the providers. So I guess strictly the absolute minimal config should be that, and as far as I'm aware everything else should be optional. I'll go over that file later on, just to explain what you can do. Okay, but now let's actually run tilt up.
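A minimal tilt-settings.yaml along those lines might look like this; the keys are the ones described above, while the repository path and provider name are illustrative assumptions:

```yaml
# tilt-settings.yaml in the root of the cluster-api checkout.
# provider_repos points at locally cloned provider repositories;
# enable_providers lists which providers Tilt should deploy.
provider_repos:
- ../cluster-api-provider-aws
enable_providers:
- aws
```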
B
I have one different question: initially there was no constraint on the cluster name, but I think now we have to use capi-test, otherwise the Tilt setup won't work. Is there any specific reason for that?
A
The reason is that we have that shell script — the kind cluster is named in there. What we didn't want to do with that script is use the default name from kind; we just wanted to make it CAPI-specific, to not, I don't know, run into issues with another cluster which might be there.
A
Okay, so just a quick overview of Tilt — I'm not sure who has already used it or not. Let me just look at my notes so that I don't miss anything important.
A
That page is essentially an overview of what Tilt just deployed, so let's just go over it. First of all, we are building a bunch of binaries — in my case just the core controllers plus CAPD, so those are the binaries. Then there are the corresponding deployments for all those controllers, and then we labeled all that stuff, so that you can drill into it however you want.
A
Then it's the same stuff again, just grouped by provider — so CAPD, CABPK, what's the name, CAPI and KCP. And then we have some observability tools, as already mentioned: we're currently deploying Promtail to ship logs, Loki to store logs, Grafana to look at logs and metrics, and Prometheus to scrape and store metrics.
A
I'm not super sure about this, but I think everything else which is not in one of those categories is automatically uncategorized in Tilt. And we have one additional thing, which is the providers: to make the local management cluster as realistic as possible, we are also deploying the provider resources. Usually, if you would just deploy your local YAML files, you wouldn't get the provider resources, because they're normally created by clusterctl — but we also implemented that in Tilt, as there are some things where you actually need those.
A
Usually that's not a problem, but just to mention it. And then, finally, you have the Tiltfile here, which is, I would say, the top-level thing. Tilt just runs through the Tiltfile, generates all those resources, and here they are. If something in the Tiltfile changes, then it just recalculates everything and redeploys everything. So, for example, you can enable or disable providers, and then they're redeployed or undeployed, depending on your config.
D
Stephan, I didn't get this provider CRD part — is this a different CRD? What is this about?
A
So clusterctl has a dedicated CRD which represents deployed providers — that's essentially what it is — and those objects are not defined anywhere in our YAML files. When you use clusterctl init or upgrade, etc., it represents the currently deployed providers by keeping those resources up to date. I can show you one of them.
A
So yeah, that's such a provider resource. In that case it represents — where's the name — cluster-api itself, the core controller: the provider name, and then the type, so CoreProvider.

D
Okay, yes, yes.
C
Yes, one query on this grouping of providers: is it done automatically, or do we have to do something in the Tilt settings?

A
That is done automatically. Those are built-in providers; they just have a hard-coded definition there. But as far as I'm aware, for external providers which are not in the core repository and not hardcoded here, it's just calculated based on the provider name or something — as far as I know.
C
Okay, and another very basic question: this tool called Tilt — is it very specific to the Kubernetes ecosystem, or does it have other use cases as well?
A
I'm not 100% sure — I assume not. I've only really seen it used with Kubernetes clusters, because Tilt was built to deploy stuff on top of Kubernetes, but it could be that you can also deploy to other targets. I don't know.

C
Okay, okay.
A
Okay — any other questions? Otherwise, okay, yep. So then, when you click on your resources, you just see your individual providers. You can take a look at logs; one example: here you can see how the binary is built, and if there are errors — I mean, sometimes there are compile errors and stuff.
A
But apart from that, that's pretty much not spectacular in any way. And when you look at a specific controller, then you see how it built the Docker image of the controller, and later on — I think somewhere here — it deployed the provider: that gets the service, the deployment itself and the provider resource. Then we're tracking the rollout, and as soon as the pod comes up, it's tailing the logs of that controller.
A
So that's what you can see here. If your controller crashes or something, it will automatically attach the tail to the new container. If you redeploy by clicking here, for example — that was a manually triggered redeployment — it just builds again and deploys again. Yeah, pretty straightforward.
A
Yep, some additional stuff here in that UI. As I mentioned before, we are deploying Grafana, Prometheus and Loki, and we are also configuring local port-forwards. Those things are obviously running in Kubernetes, but they are port-forwarded to your local machine, so you can just access them here. I would say the most useful thing right now is if you want to look at some metrics — controllers, whatever.
A
Or, probably even more useful, if you want to look at logs: you can just use it immediately and try to figure out what's going on. I'll go into a little bit more detail later on about what you can do here, but for us it's currently very useful, because we're trying to improve the logs — so we have an entire environment here, essentially, to evaluate them. Good.
B
So how can we add any custom metrics here? Because these are only specific to the CRDs or the controllers that we are running. Suppose we configure the same thing for some provider — can we then track API-level metrics?
A
So you don't really have to do anything in Tilt. What happens in Tilt is only that we deploy Prometheus and configure it to pull the metrics from your controller. The question is rather: how can you add a metric in your controller? Essentially you have to make sure that you expose the metric in your controller. I'm not sure how it's currently implemented, but there should be something like a metrics registry, and then you can register your new metric there.
A
The default ones that are already there should be, let's say, automatically registered by controller-runtime.
A
Yeah — by using a bunch of other libraries, but controller-runtime is the one that provides them. So let's take a quick look at what we do in our code. There should be something like a metrics endpoint configured somewhere — a metrics bind address. Yep.
A
So the only thing that we have is: we configure our metrics bind address somewhere here, with a default value, and then we hand that over to controller-runtime, and controller-runtime internally registers a bunch of metrics — like those controller_* metrics, for example, that you see here; they are already there. The question is how to add more. I don't know how it's currently done, because as far as I know we don't have additional custom metrics in core CAPI right now, which is not super great, but that's the current state.
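As a sketch of how such a registration could look with controller-runtime: custom metrics are registered against the registry it exposes on the metrics bind address. The metric name and package below are made up for illustration — this is not an existing Cluster API metric:

```go
package controllers

import (
	"github.com/prometheus/client_golang/prometheus"
	ctrlmetrics "sigs.k8s.io/controller-runtime/pkg/metrics"
)

// exampleReconcileTotal is a hypothetical custom metric.
var exampleReconcileTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "capi_example_reconcile_total",
	Help: "Total number of example reconciles.",
})

func init() {
	// ctrlmetrics.Registry is the registry that controller-runtime
	// serves on the metrics bind address; the built-in controller_*
	// metrics are registered against it automatically.
	ctrlmetrics.Registry.MustRegister(exampleReconcileTotal)
}
```

Anything registered this way is then scraped by the Prometheus deployed through Tilt without further configuration.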
A
If you remind me — or if I don't forget — when we later debug into the code, I can quickly show how controller-runtime counts those metrics and registers them. Okay, so that was the rough overview. What I would do now is essentially create a cluster and just show how we can use all that stuff to, you know, develop and figure out how things work. So what I'm just doing is: I generate a cluster...
A
...based on CAPD — just any version, in that case CAPD, because it's easier for me, but obviously it also works for other providers. I personally only tried it with AWS, but I don't see any problems with just using any provider; it's probably just a little bit slower, because it's real infrastructure. Apart from that, okay — let's open a watch here to see.
A
Is it all big enough? Yep, okay, good. So, just what you would expect: machines are provisioning and scaling up and all that stuff. And let's take a look at Loki.
So I have to say, what I currently have here is not yet on main. From a Tilt perspective almost everything is there, but we already made some small adjustments to the logging, which we're trying to bring upstream.
A
But I assume that, moving forward, that's something folks can just use. Okay, so I deployed that stuff and we can use the log browser. I don't really want to go too much into Grafana or Loki details — just the very easy stuff that you can do.
A
You can filter on different labels. Every log line gets a bunch of labels, and those are added by Promtail, for those who are interested: when Promtail retrieves the logs and pushes them into Loki, it adds metadata like which pod those logs are from, which namespace that pod is in, some additional metadata, and which node it is running on.
A
We have some additional config there, so for log lines where we have the data for which cluster and which machine — the name and namespace — the current log line belongs to, we also have labels. So, let's see: first, I just filter on a controller, and if I do that, I just see the logs for that controller.
A
One very useful thing to do is probably to first filter on some stuff, and then you can click here on that button, and then it just shows you only the message.
A
That's bad! Okay, yeah — there should be a controller label so that you can see which controller logs that. But oh yeah, I know why it's not there: because that is JSON logging.
A
Loki automatically has some labels, and those are the ones from Promtail. But if you want to filter on any of those labels here which are not already pre-configured ones, you have to do `| json`, and after you did that you will have a bunch more — and now there should actually be a controller label. Yeah, but it's still not there. Oh.
A
Not every log line is coming from a controller. So, now I got a log line from a controller, so I can also just say: hey, first of all, let's print the controller and the message. Then you can see: oh, that controller is emitting that message. Of course you can also filter if you just want to know what the machine controller is doing, stuff like that. So that's more or less it.
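The filters being clicked together here are LogQL queries in Grafana; a rough sketch of the pattern just described (the label and field names depend on the Promtail and logging config, so treat them as assumptions):

```logql
{pod=~"capi-controller-manager.*"}
  | json
  | line_format "{{.controller}} {{.msg}}"
```

The first matcher filters on a Promtail label, `| json` parses the JSON log line into fields, and `line_format` prints only the fields you care about.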
A
That lets you drill into a specific controller manager and then into a controller. What might be even more useful: if you want to know what is happening related to a specific cluster, you can just filter on the cluster, and then you see every log line related to that cluster. We have similar stuff — I won't show all of it — but you can also look at API server logs if you want, or container logs, which are probably not that useful.
A
Okay, good. Then I would do one example of how you can actually change code and — oh yeah, I slightly screwed up the order. What I actually want to do first is show what you can configure in the tilt-settings file, so that it then makes sense how to actually debug into stuff.
A
So, just a quick overview of what you can configure — feel free to ask if you want to know more. There is something like allowed_contexts; you can use that to make sure that you're not accidentally deploying to prod or something. An issue I had from time to time: you're working with kubectl, you have just some random current context, and then you start up Tilt — and then you deploy all that nice stuff into some production cluster.
A
I think Tilt has some automatic handling, so even if you don't set it, it automatically allows deployments to kind clusters — there is some built-in handling there to detect that. But if you want to deploy to some other cluster, you have to allow-list it here, because otherwise Tilt will fail on startup.
A
Then there is a trigger_mode. I'm using manual; I'm not sure what the other value is called, but essentially there's something like automatic, and if you use that, Tilt will automatically rebuild binaries and redeploy a controller if it detects changes to the source files.
A
Okay, then the registry — I'm just using the empty value, if I remember correctly. With our shell script we're automatically deploying a Docker registry, and as far as I'm aware Tilt automatically discovers that, so when Tilt builds and deploys an image, it loads it into that registry, and then it's used from there. You can also push to, I don't know, Google Container Registry or somewhere else, if you want to.
A
If you just want to work locally, I think there's no real reason to use an external registry. Yep — then, as mentioned before, if you have some additional providers you want to deploy which are not in core, you have to add them here. I shortly mentioned that before: if you want to use the Tilt setup with other providers, you have to check them out.
A
So that part, as far as I know, is about how we discover other providers in the Tiltfile. If you want to enable another provider to be used with our Tilt environment, you have to add a file to your repository — the providers like AWS of course already have it, but say you have your own custom provider or something: those providers have that tilt-provider.json, and that's consumed by our Tiltfile. What is defined there is the name of the provider, and then the image name — and that's important. So that's just the configuration for that, and it can of course be different in each provider, depending on what code it has. Oh, and here is actually a label, by the way — I'm not sure what the default behavior is if you don't specify the label here, but that's where it's taken from. And that's the manager name.
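A sketch of such a tilt-provider.json, based on the fields just mentioned; the image path and reload deps are assumptions, so check an existing provider repository for the authoritative shape:

```json
{
  "name": "aws",
  "config": {
    "image": "gcr.io/k8s-staging-cluster-api-aws/cluster-api-aws-controller",
    "label": "CAPA",
    "live_reload_deps": ["main.go", "api", "controllers", "pkg"]
  }
}
```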
A
So it should be relatively simple to add that. Yep — that part should be about how the Tiltfile just looks through folders and finds those JSON files, and that part, as far as I know, is then about using that and actually deploying it: based on that, we generate those Tilt resources here and deploy them. It probably sounds a little bit strange or complicated, but essentially it's very easy to configure, and if you want to know how it actually works, you have to read through the Tiltfile. Of course, it's also documented here — we have a bunch of documentation. Please let us know if something's not clear, but it should all be there now.
A
One interesting thing: if you want to deploy, for example, AWS, we have some provider-specific documentation here somewhere. In some cases a provider needs additional configuration — usually credentials, as far as I know, but maybe other things as well. So if you want to use AWS and actually deploy something, then you have to add your credentials here.
A
Okay, then we have the kind cluster name. To be honest, I looked at that yesterday and I'm not sure if we still need it or if it's actually used, so you can just try leaving out that setting and play around; it could be that we have some legacy properties here. Then kustomize_substitutions — let's see how that works. First of all, of course, if you want to provide credentials, then you need it. But another use case is if you want to pass some configuration into all our kustomize stuff. Just to explain that: when you are running the regular quick start, we have something like, hey, you might want to enable feature gates here, or you have to set all those variables before you actually run clusterctl — and here it works similarly. So first you set whatever you want to export, and then we're using that for kustomize. In the case of feature gates, either exporting the environment variable or setting it in kustomize_substitutions makes it available, and when you render that stuff, it controls whether those feature gates are on or off. As they are off per default, you have to set them to actually enable them. So I just enable all of them.
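In tilt-settings.yaml that could look roughly like this; the variable names below are examples of Cluster API feature-gate variables, so double-check them against the quick start for your version:

```yaml
# Values here are substituted into the rendered provider YAML,
# the same way exported environment variables are in the quick start.
kustomize_substitutions:
  CLUSTER_TOPOLOGY: "true"
  EXP_MACHINE_POOL: "true"
```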
A
Okay, next up we have a property for cert-manager. As part of the regular clusterctl init we also deploy cert-manager, and that's just the corresponding flag here. I think usually you should set it to true, and as far as I know it should be true per default. Just to show it: you can look up stuff like that by just grepping for it in the Tiltfile — and here you see the default we use — so it only makes sense to set it to false.
A
Okay then — by the way, if there are any questions, just ask whenever, that's fine; otherwise I'll just go through it. Okay, yep. So, next up: if you want to deploy your controllers with additional flags, you can just set them here.
A
So, in my case: we merged support for the JSON log format one or two weeks ago, but we didn't enable it per default. As I want to have JSON logs — because then locally everything becomes easier — I'm just using --logging-format=json, and similarly I'm using --v=2; I think zero is the default currently. And that other one was from some PoC for tracing. Then: deploy_observability.
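Together, those two settings might look roughly like this in tilt-settings.yaml (at the time of the session the observability part was still being brought to main, so the exact values may differ):

```yaml
# Per-provider extra controller flags, plus the optional
# observability stack deployed via Helm.
extra_args:
  core:
  - "--logging-format=json"
  - "--v=2"
deploy_observability:
- promtail
- loki
- grafana
- prometheus
```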
A
The possible values are documented here. Essentially you can just configure whatever you want; if you comment out a line, it's automatically undeployed by Tilt, and that's all it is. Yeah — that's not actually all working on main right now, just mentioning it. And last but not least, we have a bunch of debug settings. If you want to debug your controllers, you can configure some stuff here, and those...
A
...those port properties are all port-forwards, so Tilt will create port-forwards from that port to whatever it is in your controller. That port is the port for Delve debugging, profiler_port is for the regular Go profiling endpoint, and metrics_port is just the metrics endpoint.
A
For me the most useful thing is actually that part, because, I mean, the metrics I can also look at in Prometheus, and I'm not really using profiling at the moment. continue set to false — that's a detail of the Go debugging: you can start controllers with Delve, and you can decide whether the controller should wait for a debugger to be connected, or whether it should immediately continue and just start. I'm usually using continue: true, so that the controller just starts and doesn't have to wait.
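Put together, the debug section described here might look like this; the port numbers are just the examples used in this session's setup and can be any free local ports:

```yaml
# Each port is forwarded from the pod to the same port on your
# machine; continue: true starts the controller without waiting
# for a debugger to attach.
debug:
  core:
    continue: true
    port: 30000
    profiler_port: 40000
    metrics_port: 40001
```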
A
For me a huge advantage is: I can just keep that configuration here, I just use the setup, all controllers are running, and I can connect the debugger to the running controller at any point in time if I want to — but I don't have to; it also works without doing anything. So I would recommend something like that if you want to use debugging. Okay, so that's that part. What I would show now is how you can actually debug something here. So I think I still — oh yeah.
A
Okay, so I deployed the cluster before, and those controllers are all running. We can look at the debug ports, which we can do either here — okay, that's pretty hard to look at. So when you look at the currently configured ones — yep.
A
Yep, yep — you have to connect the debugger, but then you can set breakpoints and just debug.
A
Yeah — so one problem you definitely have is: when you hit a breakpoint, it just suspends the runtime and all goroutines. But that's a limitation of Go itself — there's an issue on GitHub in golang/go — and that can't really be improved. For me it wasn't really that much of an issue, because I know it's there, and in most cases you can work around it.
A
Sure — but let me show you, maybe, why it's not really an issue for me. What would be an issue: if you want to essentially pause at some point and you really need every other goroutine to keep working at the same time while you're just waiting — yeah, you won't get that to work.
A
But if it's fine for you that everything essentially continues while you debug, then you shouldn't use step-by-step; you can just use continue and further breakpoints, and in between those stops the other goroutines will just continue to run. I hope that makes sense.
A
Let me know if you have specific issues, and maybe I can give some tips on how to work around that specific situation. But just...
A
...a little bit more on how those flags actually work and what is actually running in our containers. Those debug configurations lead to this command — I think I'm using the CAPI controller here. So what we're actually doing is: we're using that start.sh — I'm not sure exactly what it does, but you should be able to grep for it — and then we're running Delve. Delve listens on port 30000; the rest is just some flags to use the right Delve API version, run headless, and so on.
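Reconstructed from that description, the invocation inside the container looks roughly like this; the binary path and controller flags are placeholders, so grep the Tiltfile and start.sh for the exact command:

```shell
dlv --listen=:30000 --headless=true --api-version=2 \
    --accept-multiclient --continue \
    exec /manager -- --logging-format=json --v=2
```

`--continue` corresponds to the continue setting above; with it, Delve starts the controller immediately instead of waiting for a client to attach.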
A
An important thing is --continue. So for us, essentially, based on our configuration, we have that port — which is also port-forwarded to the same port on your machine — and we have continue; those configurations are passed through. I also have a profiler address, but I'm not using it.
A
Okay, so that's what's running there. And when you look at the controller here — let's see how to get there — it's just running right now, but I want to step into it. So, of course, I have to know which ports I actually configured. I'm using IntelliJ, but it should work similarly with VS Code, and we documented it for VS Code and IntelliJ on that Tilt page.
A
Now let's do an example. I'll just take a look at the MachineDeployment controller. For me it's usually good, if I want to understand what's going on in a controller — to debug an issue, or just to see how it works...
A
...so I added a breakpoint to the MachineDeployment controller, and let's make a change to actually trigger reconciles. I think I should have something in my history — yeah, let's scale it up to three. And now we are immediately inside of the MachineDeployment controller; just the regular stuff. As I said, you have a bunch of options for how to step.
A
I think if you just use individual steps, then you essentially pause your whole controller and it won't really do anything until you're done. But if you click here and then use continue — that should be the button, yeah, "resume program", the arrow button — then in between, when it jumps from here to here, other goroutines can, as far as I know, make some progress. And essentially from here it's just regular debug stuff.
A
You can take a look at variables at breakpoints, jump back and forth, etc.
A
So the breakpoint will stop all of those workers, because, of course, it's based on a line of code. From a controller-runtime perspective the worker concept is abstracted away for me, so what is relevant for me is which object it reconciles. The important thing, I would say, is: hey, which MachineDeployment are we currently debugging — and what I can do...
A
Yep, yeah. So what can be really confusing: let's say you're at that breakpoint and you just want to continue from here to here, and then you're hitting that breakpoint again, right? That can happen because you're actually in just another worker now, and you don't really know which goroutine you are in. If you just step over here line by line, you can be sure that you're still in the same goroutine, but once you click resume, you could jump to something else — and that can make it complicated and confusing.
A
Yeah, I think for me it was mostly a matter of using it over and over and figuring out what features are there. So just one or two hints. I don't know what it's called exactly, sorry, but with the debugger you can do stuff like: maybe I don't want to just break at that place every time, maybe I only want to break if the name is, I don't know, capi-quickstart-md-0. Then it will actually only break if that variable has that content, and otherwise it just won't do anything.
A
So those are things you can do. What you can also do is something like: please don't break at all, just log something, and then you get an additional log line. And what is really great about that.
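The two IDE features described above, conditional breakpoints and non-suspending "log" breakpoints, map roughly onto Delve commands. The session below is an illustrative sketch only; the file, line number, breakpoint name, and variable are assumptions, not taken from the recording:

```shell
# Hypothetical Delve session mirroring the IDE features described above.
# (Shown as comments; the exact file/line/variable names are assumptions.)
#
#   dlv attach "$(pgrep -f manager)"
#
# Inside the Delve session:
#   break md internal/controllers/machinedeployment_controller.go:70
#   condition md md.Name == "capi-quickstart-md-0"   # only stop for this object
#   trace internal/controllers/machinedeployment_controller.go:70
#                                                    # log each hit, never suspend
```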
A
Okay, any questions about that part?
A
Okay, good, now let's continue here. Yeah, of course you can do the same for webhooks and stuff. I think webhooks are probably even easier if you want to understand what's going on there, because you just set a breakpoint in your validating webhook, then you do something with kubectl, and you immediately see what's happening on the server side. It's not like the controller, which is running all the time; you can trigger it more precisely, let's say, at a specific point in time, and you usually don't have to deal with the concurrency here, not if you use validating or defaulting webhooks. Another thing, just an example.
A
If you want to learn about... so you can do all kinds of things; I'm just making examples for some specific things that you can do. So if you want to know how conversion works: let's see if I have some command here in my history. So let's say, currently my default version is v1beta1, because that's the current one, and now I try to retrieve, let's do that, a Cluster as the v1alpha3 version. So then I can trigger that here, and nice.
A
Such... and if you already have a cluster, then it's probably just that, yep. Okay, so that's just a request to get the v1alpha3 version, and now you can step through all the conversion code and see how that actually works, and see stuff like: hey, you're adding an annotation here. And when you look at the result, you see it.
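Both experiments above can be triggered with ordinary kubectl calls. A sketch, with assumed object and file names: any write exercises the defaulting and validating webhooks, and requesting a non-storage version forces a round-trip through the conversion webhook:

```shell
# Trigger defaulting + validating webhooks: breakpoints in the webhook
# handlers fire synchronously with this request.
# (cluster.yaml and the object name are assumed examples.)
kubectl apply -f cluster.yaml

# Force conversion: ask the API server for the object in an older served
# version. v1beta1 is the storage version here, so v1alpha3 goes through
# the conversion webhook on the way out.
kubectl get clusters.v1alpha3.cluster.x-k8s.io capi-quickstart -o yaml
```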
A
So I used that a few times in the past. It's also interesting to see how often conversion is called, because it can be quite a lot. Yeah, okay, so far so good. The last thing I have on my list is showing how you combine that with end-to-end tests, so essentially how you can not only debug when you deploy clusters manually and play around with them, but also how you can use the Tilt management cluster together with your end-to-end tests.
A
So you can essentially, instead of just creating that cluster on the fly, use your existing Tilt-managed cluster, and then you can do two things: on one side you can step through your end-to-end tests, and on the other side you can step through the controllers to see what they are doing.
A
So if you have some weird issue in your end-to-end tests, you can just run them locally against your Tilt cluster, and then you can step through both things, through your controllers and through your end-to-end test, and figure out how your controllers react to which steps in your end-to-end test, and see where it's going wrong. And the nice thing is:
A
If you just pause your end-to-end test, you have all the time in the world to figure out what the problem is, while the controller is not reconciling something, without any kind of timeouts. So for me it's a lot easier than just running some make end-to-end test target, just seeing it fail, and then, I don't know, it's just pretty annoying to figure out things that way.
A
So for me that's mostly about: if I know that an end-to-end test is broken in some way, figuring out the details of how it is broken. But let me show you how that works. So, first of all, I just... oh, I already dropped a breakpoint, that's good. So that was the configuration to connect against the running CAPI controllers.
A
And now I'll show the configuration for how to run an end-to-end test against that cluster. One disclaimer: I think it currently only works in core Cluster API, because we added an additional flag to our e2e tests to make that possible. In my opinion it should be very easy to implement the same flag in other providers, but as far as I know nobody did yet, and probably nobody knows that we have that flag, but yeah.
A
I'll just show how it works in core CAPI, and if you are interested, just, I don't know, ping me later or open issues; I can help you get that added to other providers if it's not already there, and I'm pretty sure it's not already there. I added that a few months back. Okay, so first of all, just some information: we have a bunch of documentation around testing too, if you want to see how you can... I think it's under either testing or developing end-to-end tests, yeah.
A
The only difference from here to here is that I'm using an additional flag, and let's see, yeah, it's even documented here. But feel free to ping me somewhere if it doesn't work. Okay, so what am I doing? I have a test config here. Yeah, I'm using test/e2e as the working directory. I set that environment variable, so that's the folder where the log files and other stuff are stored by the end-to-end tests, and then I pass in a bunch of arguments.
A
So, first of all, I pass the end-to-end configuration file, which in our case is docker.yaml. Then I'm focusing on a specific test, because I don't want to run, I don't know, ten tests concurrently when I debug stuff. I think that flag is there to improve the debug output of Ginkgo itself; I think it also enables live streaming.
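Put together, the local run configuration described above looks roughly like the following. This is a best-effort reading of what is shown on screen, not a verbatim copy: the ARTIFACTS variable name, the config path, the focus pattern, and especially the -e2e.use-existing-cluster flag name should be treated as assumptions and checked against the testing docs:

```shell
cd test/e2e
# Folder where the e2e framework stores log files and other artifacts
# (variable name assumed):
export ARTIFACTS="$(pwd)/_artifacts"

go test . -run TestE2E -timeout 1h -v -args \
  -e2e.config=config/docker.yaml \
  -e2e.artifacts-folder="${ARTIFACTS}" \
  -e2e.use-existing-cluster=true \
  -ginkgo.focus="Quick Start" \
  -ginkgo.v
```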
A
But in our case we deployed everything with Tilt, so you're fine. So let's see: I started the quick start test, but I added a breakpoint at the beginning. If I find the file... yep, so that's the actual test spec. Of course there's a bunch of stuff before, but I want to skip over that.
A
Then it was... oh, it still prints "initializing a bootstrap cluster", but it actually just skipped that step because it's already there. Then it is checking which providers or controllers are there, and it starts to tail the logs, because that's what our end-to-end test is also doing, and then we're actually in the test spec.
A
And as I mentioned before, we now have two debuggers running, one on the controller and one here, and what you can do here is essentially... I think that's what I would probably do: I know that the test fails at a certain point. I mean, here we just have that quick start thing, so there's not a lot of logic, but let's say we're just going to here.
A
Okay, yep, one mistake: our tests are currently written in a way that you should only deploy one infrastructure provider, because otherwise the test doesn't know which provider it should use. With clusterctl, that should be the generate cluster, or the clusterctl config cluster command, in the test framework; it automatically detects your infrastructure provider, but it only works if you only have one. If you have two, then you have to set which one is the one you actually want to use to generate your cluster YAML. And because in our end-to-end tests we're not setting that... so I think that's just printed like that, it actually is just empty, because that's how they are written. We have to make sure that in Tilt we only have one infra provider if you want to use the tests like that, or we have to fix that stuff.
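For the manual clusterctl equivalent of what the test does here: with exactly one infrastructure provider installed the provider flag can be omitted, with two or more it must be set explicitly. A sketch, where the Kubernetes version and machine counts are arbitrary example values:

```shell
# Auto-detection only works with a single infrastructure provider installed;
# otherwise --infrastructure must name the one to use.
clusterctl generate cluster capi-quickstart \
  --infrastructure docker \
  --kubernetes-version v1.23.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 3 > cluster.yaml
```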
A
D
E
A
Yeah, nice one. Let's just assume that that sounds okay. So it took a while for Tilt to regenerate all that stuff, but eventually it undeployed CAPA, and now we see here, if I just list all providers, that CAPA is now gone. So now I can actually rerun the test, once I find my IDE. I was actually hitting that yesterday when I just walked through all that stuff, but yeah, I just did it wrong here again.
A
One nice side effect is that if you want to, or have to, run a test repeatedly, it should be faster, because it doesn't have to create a kind cluster, and creating a kind cluster usually takes, I don't know, one, two, three minutes or something. The only thing that you have to wait for, and actually the slowest part, is generating that local repository here. But apart from that you're instantly in your test. And there's a flag to work around that too, but let's ignore that for now.
A
Okay, so now we rerun the test. Let's see, hopefully I still have the other breakpoint somewhere here... yep, it should be okay. So that's actually where we generate our template; we can take a look at all that stuff if we want, and that's actually, let's just call it, kubectl apply.
A
So after that breakpoint we have all our resources in the cluster, and our controllers are actually creating that cluster, and now we're here at the wait. So one interesting thing: let's just assume you have an issue somewhere in your template, and this here would actually fail, so your cluster doesn't come up, or your machine deployment doesn't come up, or all that stuff.
A
Now what you can do is just add a breakpoint when something is triggered, and you can wait before you run into the code which would actually time out your test. So now, instead of "oh, what is the timeout, five minutes? I have five minutes to debug that", you can just take all the time you want to figure out what's going on and then just continue your test.
A
Yeah, and I think the rest is pretty much obvious; it's just the same as before. While all your controllers are running, you can add breakpoints, debug stuff, take a look at what's happening; you can even play around with the resources you just deployed to simulate some stuff. And yeah, I'll probably leave it at that.
A
Any questions? I think otherwise we would be at the end, essentially. So if you have any questions about anything, now is the time.
E
A
Yeah, absolutely. So we have, I would say, two main pieces of documentation which are relevant for what I showed today. The first, the actual main thing, is our default developer Tilt page; that's the one under "Rapid iterative development with Tilt". One additional hint: please take a look at the main version of the page. When you go to the introduction, you see that we have different versions of the book, and the current state of main is obviously on "main".
A
If you just go to cluster-api here, then you will get the version for the 1.1 release, and the Tilt page was slightly improved since then. So that's the page which documents the prerequisites, how to actually start the cluster, minimal tilt-settings, and all the other fields that I showed, plus provider-specific stuff. I think that should cover that part, including, let me check, yeah, including how to debug. So that's the configuration for VS Code.
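The tilt-settings file referred to here is where the debug configuration lives; a minimal sketch that writes one out, assuming the debug/port fields described on the Tilt page (the field names and port number are assumptions, not verbatim from the docs):

```shell
# Write a minimal tilt-settings.json enabling a Delve debug port for the
# core provider, along the lines the book's Tilt page describes.
# Field names and the port are illustrative assumptions.
cat > tilt-settings.json <<'EOF'
{
  "enable_providers": ["docker"],
  "debug": {
    "core": {
      "continue": true,
      "port": 30000
    }
  }
}
EOF
```

With a file like this in place, Tilt starts the core manager under a debugger so an IDE can attach to the configured port.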
A
Here's the link to the IntelliJ documentation, on the JetBrains page, so JetBrains' documentation for this stuff. And the other part is essentially around testing, how you can then use your Tilt-managed cluster to run an end-to-end test against it. So that's our testing page, which is here in the developer guide, and the really important part is running the end-to-end tests locally, yeah.
C
Okay, okay. And I've seen in the Tiltfile there's also, say, tracing. Do we have something related to tracing as well, or...
A
E
Yeah, but we have... we already have Grafana, Loki, Promtail. So have you shown how to enable this? Yep.
A
Okay, good. So far, so good. Thanks everyone for joining. Yeah, as I said, if there's any feedback or ideas for future topics... we have a few more ideas. I mean, we will hold the same session for the US time zone, and we will do another one around all the test configuration stuff and the end-to-end tests. But then we, let's say, would have a bunch of options, and what we really want to know is what people are interested in. What would actually help?