From YouTube: 20200205 - Cluster API Office Hours
A
Okay, today is Wednesday, February 5th, 2020. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle, and if you need access to this document, please join the SIG Cluster Lifecycle mailing list. We do have meeting etiquette for our meetings, which is essentially: be kind to each other, and please use the raise-hand feature of Zoom.
A
If you would like to comment on something, please add your name to the attendee list, and if you have PSAs, demos, or discussion topics, please fill in the appropriate sections and we will go through them. So before we get started with the PSAs, something that we like to do is give time for new attendees to say hi if they're interested. So if this is your first time joining us, welcome, and if you would like to introduce yourself, I will pause now and see if anybody is interested.
C
We don't have a location yet, but depending on how many people respond and how many people are interested, we're going to lock down a location, and I'll send out a meeting invite as a reminder to anybody who signs up as we get closer. It will be on April 1st — that's the Wednesday of KubeCon, the second day of the actual conference.
D
Well, it works for me. Okay, so one thing that we have been doing over the past few days — and this is mostly relevant for infrastructure and bootstrap providers — is talking a lot about how to convert between v1alpha2 and v1alpha3. We started by auto-generating the conversions, which you can see here.
D
In the past few weeks, we found a few issues with that. For example, if you add a new field in v1alpha3, when you convert back to v1alpha2 and then forward to v1alpha3 again, you actually lose the data. So in Cluster API there is a util/conversion package that will help you with that. So, for example, here we want to convert from v1alpha3, which is the source.
D
In this case, we make sure to preserve the data in an annotation, in JSON format, and then when we convert back from v1alpha2 to v1alpha3, we make sure to restore the data. It turns out this was enough to preserve the data, but for fields that you need to manually convert, you then need to do things like this.
D
We added a new function that lets you test the conversion from v1alpha3 to the hub version and back. This is a very generic function: you can use any hub version — that's the first parameter — and any converter as the second one. We found a lot of issues that we had been missing when we added these fuzzy conversions, so for any infrastructure or bootstrap provider, I would really suggest doing the same. I've been working on this today as well.
A
I would add: we also initially went down the wrong path of returning an error if we couldn't convert from v1alpha3 to v1alpha2, because we'd added a new field in v1alpha3 that wasn't available in v1alpha2, and so we just said: nope, not going to convert, return an error. In talking to SIG API Machinery, we learned that you basically can never return an error from a conversion function — it needs to be the equivalent of "I need to panic" if you're going to return one. And so that was why Vince added the approach of saving all the data from the newer version as an annotation in the older version when converting down, so that when we go back up and do a full round trip, we can restore anything that doesn't exist in the older API versions.
A
No? Okay — Tim.
F
I had a quick question for Vince here: I haven't seen these fuzzy conversion tests before. Could you talk a little bit about the fuzzy aspect of it? Are you actually fuzzing inputs to the conversion, or how does that work?
G
Yes, yes. So traditionally, once it's been stored at a given version, you can't necessarily always down-convert, so a round trip through a down-conversion should be something of a non sequitur: if it's been up-converted, you can't round-trip to get back to the previous versions, because there's no guarantee that you'll be able to do that.
A
Okay. If you do have questions or problems trying to get them to work, please come find us in Slack and we will help out. Okay, Chuck, I will stop sharing here and let you go over the demo you've got.

Very cool, thank you. Cool, everyone — so I've been working on, or this has really been a community effort to build out, end-to-end tests for Cluster API using the Docker infrastructure provider, and I just wanted to show everyone here how to run those tests locally.
A
So you don't have to wait for Prow, and it helps give you some nice signal on what you're developing. So let me find a terminal — great. Does this look okay? Good for me. So if you want to run these tests locally, in your Cluster API directory you can see that we've got a couple of make targets here. There are three of them — a full variant, an images variant, and just the regular test target — and it's sort of a cascading series of completeness of what you're testing.
A
The difference between full and images is that full will also rebuild the manifests, whereas images will rebuild just the images and not the manifests, and then the test target without any suffix will just rebuild the Docker provider for your tests. So it should be as easy as running the test-capd-e2e target. If you have trouble with this — if it doesn't work on your machine or something — please file an issue, because we'd like to make this work as widely as possible.
A
These tests are generally good for things that span controllers. So if you are making a change where you've got to add something to an infrastructure provider that's going to get copied into the core Cluster object, this would be a great tool to test that change without having to manually rebuild all your things and spin up clusters and all that. So it's kind of a nice smoke test for those changes. You also don't have to run it locally.
A
If you don't feel like running it locally, it is set up to run on Prow, so every PR that you open will run this job. But there are a couple of caveats to that. One: it takes about 15 to 18 minutes if Prow is having a mediocre day. Sometimes it takes longer, sometimes less — it can take as little as nine minutes — and it does not take that long on my machine, but I have a pretty beefy MacBook Pro.
A
The other caveat is that it does flake in Prow — sorry, it passes about 90% of the time; it flakes about 10% of the time. I'm still working on those flakes; that's kind of an ongoing process. We keep squashing one flake and moving on to the next, but we'll get there eventually. It is not required for your PR, so your PR could still get merged even if these e2e tests fail. But unless the failure looks familiar, it may actually be real signal.
A
Well — not what a success looks like, which is bright green, but what a failure looks like, and how to know whether you're looking at a flake or at an actual failure that you may have introduced. So yeah, I just wanted to share that. I'm not going to wait for this to run, because you don't want to sit here and just watch it doing stuff, but if you are interested in working on this, there are a number of open issues that are marked help-wanted for the next month.
A
Oh, sorry about that — the Zoom UI is cutting off the bottom, but there's nothing really there, so I'm just going to stop sharing; it's just running the tests, you don't need to see it, right? You can run it yourself and give it a shot. If there are any questions, please feel free to ping me or open an issue, and I'll do my best to respond.

Thanks, Chuck. All right — you've got your hand up. Yeah.
A
So this does all run in Docker, and my Docker is not using all of my resources. Let me just open up my — okay, they completely changed the Docker Desktop UI. I think mine is using six gigabytes of memory and four CPUs, and that runs in, you know, five minutes — less than five minutes.
A
So — similar sorts of tests, but using the CAPA infrastructure provider? Yeah, those tests do exist in CAPA, and we are looking — I mean, I would like to convert those tests to use the framework tooling that we've built out in CAPD, but that's just a matter of priority right now, and there's nobody working on that at the moment. I think it's possible to do, and I think Jason can probably give a little update on CAPA here. Yeah.
C
I was just going to say: we do already have a set of tests that run. They are intended to run under Prow right now, and the infrastructure there takes advantage of what we call a janitor process to clean up any potentially stranded resources. There was some work done to make that janitor work in a local environment.
C
We just haven't really documented that very well. That said, we're in the process of having some tests that were originally created for the v1alpha2 version of CAPA ported over to the current master branch. As soon as that work is done, we do plan on converting everything to use the e2e framework that Chuck mentioned, and then we'll be in a much more similar position to the Docker e2e tests.
D
So the other thing that I had was a chat about 2140, which I'm going to link in chat, and which we were discussing before. It's pretty much an issue that covers, again, conversions, and one of the problems that we have with conversion is the references: we need to convert those as well. For example, if you have a Cluster, you have an infrastructureRef, and when the Cluster is converted to v1alpha3, that reference can still be pointing at the older version. So we're trying to work towards converting those automatically.
D
The idea is that we automatically convert those. But now there's a question: what if an external provider doesn't follow the same versioning as we do? So, for example, v1alpha1 of some provider could correspond to a different version of the Cluster API contract. At the bottom of the issue we're discussing either an annotation or a label to apply on each CRD, so that when you deploy these you say: hey, the storage version for this kind is such-and-such.
D
There are, for example, providers that are not following our API versions, so we can't just assume and upgrade the references to v1alpha3 just because we are moving to v1alpha3.
D
We pretty much don't know which contract it will adhere to. And we saw — Chuck saw — this with dataSecretName, where the machine was referencing the v1alpha2 version of the secret, but dataSecretName is not available in v1alpha2, and so we have to update that to v1alpha3.
A
Yes — so I know Vince and I had been going back and forth until about half an hour ago with some scenarios and what we need to do. I think it probably would be worth walking through some more detailed examples and trying to come up with a design — either it's what we talked about a couple of weeks ago, or we need to refine it a little bit. I don't know that we need to take everybody's time to do it, but if you are interested, I think we can set up a Zoom after this, if that works.
D
Yes — you can just ask the API server about the CRD, and there can be only one stored version, so when you walk through the versions you can actually know which one is the stored one. The problem is that we don't know whether the stored version matches the contract of the Cluster API that's running on the cluster. So let's say, for example, you have a v1alpha3 Cluster API running.
A
And it sort of goes back to Tim's question a few minutes ago: if you've upgraded the stored version, there's no path backwards. I'll echo what I said before — we'll just talk through it and see what we can figure out. Sounds good, thank you. Any other comments on this topic before we go to the question about the release tag?
H
So there's an issue with some discussion on it. I would like to maybe direct some attention there, to see whether this is something we can come to a conclusion on — how it should work — and then I'm happy to take the issue and run with it. I just wanted to get some better consensus around how this might work.
C
So I think we had a discussion on that issue, and you mentioned trying to diagram up some of the ideas that you had. I think that would definitely be a great first step, because, as you can see from talking through the issues, we were talking in a lot of abstracts. Putting some type of visualization to the process would, I think, help out quite a bit, and I look forward to seeing what you come up with. Cool.
A
I think this is also an area where, once you get a little bit of agreement on an approach, it can get coded and we can merge it, and if we find out that there are issues we can just flip it off — you know, have the option disabled, or require you to enable it in some way — while we work through the issues. But in the spirit of iterating fast and often, I think trying to get it coded, merged, and tested, warts and all, is better than waiting for the perfect design. Cool.
H
So I would say probably not release-blocking, only because there's potentially some label hacking that you could do to get this behavior today. It's just not the best user experience — I have not tested that, and someone can check me on it. And I think somewhere in here we discussed opt-in versus opt-out, with flags and things like that. I don't know where we landed, but I think there's a good argument for either.
H
The next one? Sure. So we have had Cluster API deployed in the wild for a bit now, and different engineering teams here have been interacting with the APIs, and one of the things that has been at least a little bit confusing for folks is this whole immutable-template, make-a-new-one kind of process. I kind of just wanted to raise this again, just to see, like...
H
Are we sure that we're getting the value out of having those templates be immutable that we think we're getting? Because, for example, nothing right now is stopping me from editing the template — say, adding an annotation into my MachineDeployment's template — and it will then add new machines that don't match the original template. So I'm thinking: is this something that we intend to put a validating webhook around in future releases, or — someone just said, winky face, "we have a bug?" Yeah.
A
We had intended to add validating webhooks to reject updates to templates, outside of maybe labels — although I don't know that that's reasonable — but yeah, it could be a bad UX, and maybe we need to come up with something else. I don't think it's going to change in the v1alpha3 timeframe; I just don't think there's time. Hang on, Vince — Jason, you've got your hand up. Yeah.
C
So the biggest challenge that we have with the templates — and the reason why we decided to go down the immutable path — is that we don't necessarily have a way to get at what the old data was. We don't have a good way to revert in the case of a rollback-type operation, and I think that will be the challenge if we want to remove that immutability: how do we know what the contents were, to enable that kind of rollback?
A
So I know we've had a few different ideas — there's been a lot of discussion over the last year on how to possibly do what you're asking. There's also another issue that doesn't necessarily look like it's exactly the same thing, but this one right here is about supporting machines that have the container images baked into them that aren't from the Kubernetes Docker registry, and given some of the ideas in there, I think we as a community need to try to brainstorm on these couple of issues and see if we can come up with something that works — whether it's webhooks or some first-class configuration element that says: I'm a bootstrap config, but I actually need to wait until another participant fills in some information. And then you have another controller, which is that participant, that knows: oh, the infrastructure is AWS and the bootstrapper is kubeadm, so let me go weave my AWS-specific stuff into the kubeadm config.
H
Yeah, that's fine. I think what we'll probably do is look at the issues that already exist, and then I think we're going to implement something once we have a good understanding of what default we want, even beyond the kind of basic CAPA stuff, so that we have a clear path to getting some code together. And then I think what we'll do is just share that with the group, see how it works, and iterate from there, if that's cool.
F
Yes — so I guess I have kind of a project-culture question, perhaps. I'm relatively new to Cluster API and I'm looking for ways to get involved, so I've been looking at the issues in the upstream repository, and I'm noticing a lot of the good-for-newcomers issues are kind of assigned. And looking back through some of the older issues, I see discussion like "I've got a patch ready," but maybe the issue is assigned and nobody is doing anything. I'm kind of curious what, I guess...
A
If you can't find anything that looks like a good thing to work on, feel free to reach out on Slack — there may be stuff floating around in our heads that we just forgot to file and need to write down.
A
Sure — and just to follow up on good first issues: I definitely think documentation, like he said, is a great place to start, and Warren says documentation and tests. So we do have this master-branch documentation that has things you won't see in the main published book for v1alpha2, such as clusterctl. This has been, and is being, completely rewritten from what we had in alpha1 and alpha2, and we could definitely use testing here.
A
We haven't gotten to all of it yet, but that's a great place to start, and Warren also says that the make serve book target will help you test the docs locally. There's also — I don't know why they're not showing up anymore, but if you go into the pull requests, it used to be that you could go into one of these PRs and, down at the bottom under the statuses — no, now it's showing up, of course; make me a liar.
A
There's a "deployed netlify" entry in here, and if you open that, what you get is the rendered version of the book based on that pull request. What I was going to say is that you could just go to Netlify — there's a certain URL you can use, which I can go dig up if these links are not showing up where it says "deployed netlify" — so there is a way to get to it even if the link isn't there. But now that it's showing up, you can just use that.
A
The MachineHealthCheck work has the initial skeleton, or foundation, merged, and there's an open pull request for the implementation, which is great — so thanks to Joel for getting that in. And then on the MachinePool work, I know they're trying to get the unit tests added sometime today or tomorrow. So I think things are moving along nicely, but there's still a lot of work to do.