From YouTube: 20191218 - Cluster API Office Hours
A
Hello, today is Wednesday, December 18, 2019. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle. We do have meeting etiquette, so please take a look at what we have here. Basically, use the raise hand feature of Zoom if you'd like to discuss something and I will call on you, and in general, please be nice to each other. This meeting is being recorded and we will be posting the recording on YouTube later. Please add yourself to the attendee list.
A
If you haven't done so already. And we will get started here. So, first up on the agenda: if anybody is new and interested in introducing themselves, we want to take a minute or so to do that. This is entirely optional, so if you don't want to do it or don't feel comfortable, that's totally fine, but if anybody wants to say hi who's new, please raise your hand and let me know.
B
C
A
Ok, so I do have a demo for Tilt. I recently spent several days last week and a little bit this week working on adding a Tiltfile to Cluster API. If you're not familiar with Tilt, it is a very nice way to do rapid iterative development for Kubernetes-based projects. Let's see if the docs made it in. We do have a document here: if you go to the master version of our documentation, it does walk you through what you need to do to get this going. So basically, you need Docker.
A
You need a fairly recent version of kind. You need Tilt, and then you'll need to do git clones of at least Cluster API, and if you have specific providers beyond Cluster API and CAPD, the Docker provider, you would also want to clone those locally to your system as well. So I'm going to demo this, but before I do, I think it's probably worth walking through the documentation so you can see what's here. You start, and you just create a kind cluster. I will also note you don't have to use kind for this.
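The prerequisites mentioned above can be sanity-checked with a quick shell loop. The tool names here are the ones from the talk (Docker, kind, Tilt, git); install steps are left to each project's own docs.

```shell
# Check that the tools the Tilt workflow needs are on the PATH.
# Tool names come from the talk; any missing ones are reported.
for tool in docker kind tilt git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```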
A
I know that Nadir tends to use a different cluster, just a local one. So there is a setting in here where you can turn off some of the stuff that it does with kind, once we get that pull request merged from Nadir. After you create your cluster, there's a tilt-settings file that you would stick in the Cluster API repository. This is in .gitignore, so anything that you put in here will not be committed to the repository. There are basically three required fields. Right, I guess I should say: allowed contexts.
A
This is a Tilt setting that lets you tell it which Kubernetes contexts in your kubeconfig you want Tilt to work with. If you leave this blank, then Tilt will not deploy anything, because it wants to be certain, and it wants you to be deliberate in specifying which kubeconfig contexts are appropriate to deploy into. The default registry setting is for if you're building images and you need to push them to a registry, because you're not using kind locally.
A
For example, a lot of the images for whatever you're building, especially Cluster API, will go into an upstream registry. So for Cluster API, we push to the Google Container Registry, and you most likely won't have permission, and won't want, to push into the Cluster API registry. So you would set this to something on Docker Hub or any place that you have push access to, and Tilt is smart enough to retag the images into whatever you have and then push them into your registry.
A
Next up is provider repos. This lets you bring in other providers outside of just Cluster API, and typically these are going to be relative siblings to your Cluster API checkout. And then the last one is enable providers, and this tells Tilt specifically which ones you want it to load and deploy out of the box.
A
It will do the Docker provider if you don't specify anything for enable providers, and then you additionally could set this and say, I'd like AWS, yes, or any of the other ones that support it. And then once we do that, we'll just do tilt up.
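Putting the four settings described above together, a minimal tilt-settings.json might look something like the following. The field names match the ones discussed in the talk; the registry, path, and provider values are illustrative placeholders, not values from the demo.

```json
{
  "allowed_contexts": ["kind-kind"],
  "default_registry": "docker.io/your-username",
  "provider_repos": ["../cluster-api-provider-aws"],
  "enable_providers": ["docker", "aws"]
}
```

Since the file is in .gitignore, per-developer values like a personal registry are safe to keep here.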
So let me go ahead and get my terminal over here and get my screen share set up to show that. So give me just a second to do that.
A
So let me get this up, and get into the right branch. All right, so I have the master branch of Cluster API checked out. This is the AWS provider; this has the master branch checked out. And if we take a look at my tilt settings... actually, I'm not going to show you my tilt settings, because they have some AWS credentials in there, but I can grep for enable providers.
D
A
So this will take just a little bit. kind is pretty fast, and once you have kind up and running, then you can do tilt up. I did want to show this from scratch, so I'm glad I didn't have it up to begin with. So what the Tiltfile is going to do while we're waiting is: it will pre-pull and preload the cert-manager images into kind. This is an optimization, just because if you don't do this, then every time you create a new kind
A
cluster, you've got to wait for it to pull multiple megabytes over the network. So this just speeds things up a little bit. And then it will build your Cluster API manager binary and any of your provider manager binaries, and deploy all of them. And the best part is that it supports optimized live reloading.
A
So when you are working on your Go code, or really any other files that it's watching, Tilt will build your changes to the managers locally on your system. It will copy the compiled manager binary into the running pod and running container, and then it'll issue a restart on the process. So it doesn't have to build the Docker image and load it and kill the pod and update the deployment and whatnot; this is just a live reload.
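The dev-only image that makes this live reload possible is, roughly, a sketch like the following: a small base image, the start/restart helper scripts, and the manager binary built locally by go build. This is an illustration, not the actual file from the repository; the base image and paths are placeholders.

```dockerfile
# Illustrative sketch of a minimal Tilt dev image -- not the production
# Dockerfile. Script and binary paths are placeholders.
FROM gcr.io/distroless/base:debug

WORKDIR /
# Helper scripts that let Tilt restart the process in place
# instead of rebuilding and redeploying the whole pod
COPY start.sh .
COPY restart.sh .
# The manager binary compiled locally on the host by `go build`
COPY manager .

ENTRYPOINT ["sh", "start.sh", "/manager"]
```

When a watched file changes, Tilt only has to copy the fresh binary into the running container and invoke the restart script, which is why the edit-to-running-code loop is so fast.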
A
All right, it's up and ready. So now you see a whole lot more stuff here. This has taken all of the kustomize YAML from Cluster API itself, and, because I have the AWS provider enabled, from that as well. And so you can see we have a couple of namespaces going in. We have all of the custom resource definitions for Cluster API, the bootstrap provider for kubeadm, and the AWS provider, and then all of the RBAC is going in. For AWS, I have a secret that's got my base64-encoded AWS credentials.
A
We have some cert-manager bits, and all of this goes in first. Then what you see is it's going to build the Cluster API manager, also known as the core manager, and the AWS manager, and you can see that it's just running go build. So this is happening on my laptop right now, and once it does those two, it will build a Tilt-specific Dockerfile that's very, very minimal. So this is not the production Dockerfile that we're using to build the images that we release.
A
This literally just takes a couple of start and restart helper shell scripts, and it'll copy in the binary that we just built in the previous step. And then, so it's doing this: this was for Cluster API, this is for the AWS provider. And then you can see it's actually getting some logs out, and at this point everything is up and running. So what's cool is, I can take my editor over here, and we'll go
A
open up, say, the cluster controller for Cluster API, and I can come in here and we'll add a log statement. So I can say hello, CAPI meeting. I'm gonna save this, and if we come back over here, you'll see that it noticed this file changed and it's rebuilding the manager for Cluster API. This shouldn't take too long, especially thanks to Nadir, who fixed the full rebuild issue. And now what you see is... whoops, a lot of log output.
A
So what you'll see is that it's rebuilding the Docker image for the CAPI controller manager, but instead of doing a full Docker build, it copies the file that we just built and invokes this restart script that the Tilt folks put together. And here you can see our log message is showing up. So we have a couple of outstanding PRs that we need to get merged before this is fully ready to go, but those didn't get in today. And this is the basics for it.
A
I do just want to cover one more thing, which is: if you are working on a provider and you want to support this Tilt-based flow, all you have to do is add a tilt-provider.json file to your repository, and you have to put in the name that you want to use to reference it. So this saying aws here means that I can reference it in enable providers as aws, and then there's a config.
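A tilt-provider.json along the lines described here might look something like this. The aws name matches the example in the talk, but the image and the watched paths are illustrative assumptions, not values read from the actual file.

```json
{
  "name": "aws",
  "config": {
    "image": "example.com/your-registry/cluster-api-aws-controller",
    "live_reload_deps": ["main.go", "go.mod", "go.sum", "api", "controllers", "pkg"]
  }
}
```

The name is the key a developer puts in enable providers in their tilt-settings file, and the config tells the shared Tiltfile what to build and which paths to watch for live reload.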
A
All right. If you try it out and you have problems, please feel free to open GitHub issues or reach out on Slack. We are super, super happy that this is in and that everybody can take advantage of it. There are, as I mentioned, a couple of pull requests that are outstanding; one of them, from Nadir, makes it so that the docker provider actually works.
A
I'm gonna move on to what is currently the only discussion topic, and this is from Michael Gugino, who was not able to make it today. But he did want to point out, as a follow-up to what we had discussed, I believe, last week: he created a CAEP for a node maintenance lease. This is to avoid having multiple components that are all trying to do things with a node stomp on each other's toes.
A
I did want to check in on the CAEPs. I have it... hold on just a second... I don't have it, so we'll have to go get it: the machine remediation proposal, the machine health checking proposal. Alberto, we had said that today would be the lazy consensus end. I did see that there were still some small comments that were outstanding. I don't believe that there's anything that is blocking this going in, and we can continue to iterate as needed, but did you have any last-minute changes you wanted to make before we merge this?
D
A
All right, if you think of something, let me know, but I'm gonna go over to the list of issues that don't have milestones. So, going from the bottom up: I know we talked about this one before, about supporting clusters that don't have a load balancer, and I need to double-check with you and Andrew about it, because I know we had talked about what to do with this one.
A
So I'll come back to that. And I know that for the project roadmap, I think it is fair to say that this is going to be documentation; it is going to come in, and we will put it in our milestone. If you all have not seen it, there is a Google Doc that is very, very rough, seeded with some initial thoughts, but nothing is set in stone. So I would encourage you to take a look at this and add your own ideas.
A
Have some discussion around when things should come in. And Tim had a good suggestion that we should probably try to figure out: what do we consider a 1.0 beta, and what's above and below the line? So please take a look. And, I think, you know, it's mid-December, we're getting into the holiday season, so I may open up a pull request.
A
Let's see, next up is: automate as much of the release as possible. And I know that I was supposed to follow up from last week to at least show a couple of folks how we do releases, and I was not able to do that. So this is probably gonna end up being another New Year thing. We did get a change to our Prow image
A
building jobs to build git tags, in an unsupported way, at least, because Prow doesn't officially support it. But the next time we cut a release of Cluster API, the kubeadm bootstrapper, the AWS provider, or the GCP provider, we'll be able to try out the automated image building. But I'm going to put this in next for now, mainly because our alpha 3 milestone is big enough, and if we can get around to this, great, but I don't want to have this be a release blocker.
D
E
Yeah, I think as part of this release, and even previous releases, we should submit for certification, CNCF conformance certification. I think this is probably a reasonable expectation for consumers of Cluster API. So, we do this for kubeadm as part of the release, and we have the instructions listed there. So I think it's worthwhile to do it for CAPV v1alpha2 plus.
E
F
A
G
I was just gonna say that we're doing this on our side: with every release, we're using CAPV to stand up the conformance cluster that runs the tests and whatever. So if you guys want, I'm happy to chip in. No? Okay.
H
I was just gonna say, at least for CAPA, for the release-0.4 branch, we have automated conformance jobs that run using Sonobuoy. So what we could potentially do there is update that with the tagging that we're doing for automating the image builds, and we should be able to generate an automated conformance report that we could use for submitting every time the tag is pushed.