From YouTube: 20190807 scl capi office hours
A: Of SIG Cluster Lifecycle. Let me go ahead and share my screen so that we can get the — whoops — right now. So we get the meeting agenda up here. I need to switch to the right account, so bear with me while it logs me back in. Okay, now that I have the right one, let's see. As usual, please add yourself to the attendee list if you have not done so already, and let's get through the agenda. Jason, you have the first item, please. Yes.
C: I just wanted to let everybody know that the vSphere provider 1.9 release has been cut. It added support for Kustomize versions 3.0.3 and later, and added some logic to improve the cluster deletion workflow. In particular, if you're using kubectl to delete clusters, the cascading deletion now doesn't rely on the Kubernetes garbage collection, and there was a fix for how we set the owner references for Machines, MachineSets, and MachineDeployments.
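That owner-reference fix can be pictured with a minimal sketch like the following (names and the UID are placeholders, and the exact schema is abbreviated): with `ownerReferences` set correctly on each Machine, deleting the owning Cluster can cascade down to the Machines without depending on the garbage collector's timing.

```yaml
# Hypothetical sketch, assuming the v1alpha1-era cluster.k8s.io API group:
# a Machine whose ownerReferences point back at its Cluster, so cascading
# deletion can find and remove it.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-cluster-controlplane-0        # hypothetical name
  ownerReferences:
  - apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    name: my-cluster
    uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
```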
A: Thanks to some hard work from Chuck Ha, we have the basic reconcile flow working, so we can generate some stub cloud-init data, and hopefully soon we'll be able to actually generate cloud-init — or, sorry, control plane init, control plane join, and regular node join data as well. Before I move on, any questions on this provider status?
A: So maybe we can get Chuck to come back and fill this in with more of an update later. For the AWS provider, we released 0.3.7. I meant to bring in some release notes for this but got sidetracked, so I will come back and fill them in in a minute, or you can go look at the release to see what's there. And then, in terms of v1alpha2 work, we are porting.
D: Again, right now I have a local branch stacked together that's working, with pretty much everything all together to make clusters, which is really cool. There is a local PR that is going to improve CAPD — nothing interesting to go into detail on, mostly just making cloud-init work with the kindest/node images — but yeah, it's more or less all come together in the past couple of days.
F: Yeah, so thanks indeed. So we released v0.4.1 last week — or, rather, the team did; I was on PTO — and so that was really cool to see them come together and do that without my interference. It's probably actually easier. There was a bug fix included as well, in 0.1.8 of CAPI, so people should take a look at that; I've linked to the release notes. v1alpha2 work started about two weeks ago, tracked in a time-boxed issue.
F: ...multi-NIC, multi-device, multi-IP-address setups. But we've been dealing a bunch with CABPK due to some use cases, and one concern we have is how we're going to end up handling that when we relinquish some of that control to the bootstrapper, with the Jinja templates and all that good stuff. So I am going to be touching base with, I guess, Chuck and Fabrizio on that, as well as filing some issues to sort of outline it.
B: I think for the other providers it's going to be on a per-provider basis. I know for OpenStack and vSphere they have relatively mature out-of-tree cloud providers, so they should be using the out-of-tree providers for v1alpha2 — probably also Azure. But I know for folks at Google and AWS, they're still developing the out-of-tree providers; they're at an earlier stage, so I wouldn't expect them to be adopting that for v1alpha2.
C: Yeah, I just want to echo that. The last time I inquired in what was previously sig-aws about the status of the AWS provider, it was highly recommended not to rely on that, because it wasn't as fully functional as the in-tree provider was, and that's the main reason why we've avoided it to date with the AWS provider. But I do like the idea of blocking beta on having all of the cloud providers — or all of the, say, SIG-owned, project-managed cloud provider implementations — using the external provider.
A: Okay, next up is the UX of v1alpha2. So I opened up an issue and we had some back and forth. What I was suggesting, or pointing out, is that with v1alpha1, because we have the provider-specific details embedded in Cluster and Machine and there's no delineation between bootstrapping and infrastructure, you literally can create a Cluster object and a Machine object and you're done.
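As a rough illustration of that v1alpha1 shape (field names abbreviated, and the provider payload left empty since it is provider-specific), the provider details live directly inside the two objects:

```yaml
# Sketch of the v1alpha1 UX: two objects, provider config embedded inline.
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  providerSpec:
    value: {}   # provider-specific config embedded here
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-machine
spec:
  providerSpec:
    value: {}   # bootstrap and infrastructure details mixed together
```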
C: As I raised in the issue, one of the concerns that I potentially have is embedding part of the API into the annotations, instead of just using the annotations to kind of track internal state. I worry that we could potentially be creating kind of a shadow API, and I brought up the idea that anything that could be a top-level field — like the default provider to use if one's not specified — we could potentially just make a top-level field directly, and then it's baked into the API itself.
C: That also avoids having to deal with the potential issues of providing API guarantees and consistency around these annotations. It doesn't necessarily solve the issue of defaults within one of those specified providers for the configuration, though, and I think Vince had some suggestions there, but he's not here today.
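A sketch of the two options under discussion — both the annotation key and the field name are made up for illustration:

```yaml
# Option 1: default provider tracked via an annotation (risks a "shadow API").
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
  annotations:
    cluster.x-k8s.io/default-provider: aws   # hypothetical key
---
# Option 2: the same default promoted to a real, top-level spec field,
# so it is covered by the API schema and its compatibility guarantees.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
spec:
  defaultProvider: aws   # hypothetical field
```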
H: I mean, I'd like to do it for alpha 2, just from a user experience standpoint, and if we need to have annotations as a stopgap for the time being, I think that's okay. I think one of the things this particular subproject struggles with is user experience, and I think modifying that — making it simpler for people who are developing their specs, or who have an auto-generation tool to do that — you really just make it that much simpler, right?
J: So, for example, say you're in an enterprise with a platform team that manages the platform — the platform being Kubernetes and the clusters of Kubernetes — and you have feature development teams that want to consume that platform and create clusters. If I'm a consumer of this — of the Cluster API — then I want to use the defaults that have been provided by the platform team managing the platform. So I wouldn't need to know to use a default AWS and a default kubeadm; I would just want to declare my intent to have a cluster of X number of nodes.
F: Yeah, that makes sense to me. The notion that makes sense would be — and this is very broad — something like having an environment variable. So if there isn't a default defined on the cluster, it's assumed to be a thing, and that assumption can be configured outside the cluster spec, correct?
H: I think you can do this outside the scope of the change that Andy has proposed — Andy and Jason. I suppose — I think that, you know, you could set policy external to this that autofills the fields based upon the settings; like auto-generation, your tooling can do this for you. You could do this with the clusteradm proposal — it could have a default setting that could be configured.
K: I mean, what I understand most of you describing is something like: a cluster is analogous to a pod, and that pod needs to have some dynamic storage, a provisioned volume claim. And how do you create that persistent volume claim? You have a storage class — and there might be a default storage class. So in the same way, there might be something like, you know, a cluster class or provider class, right, that is outside of the cluster object.
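Following that StorageClass analogy, a hypothetical "cluster class" object might look like the sketch below. Nothing like this existed at the time; the kind, API group, and annotation are all invented for illustration, mirroring how a default StorageClass is marked.

```yaml
# Hypothetical "ClusterClass"-style object, analogous to a default StorageClass:
# platform admins define it once; users reference it, or omit it for the default.
apiVersion: cluster.x-k8s.io/v1alpha2   # hypothetical
kind: ClusterClass
metadata:
  name: default
  annotations:
    clusterclass.x-k8s.io/is-default-class: "true"   # mirrors the StorageClass pattern
spec:
  infrastructureProvider: aws      # defaults chosen by the platform team
  bootstrapProvider: kubeadm
```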
A: I think there's two user stories here. There's one which is: I'm a user, I don't necessarily belong to a platform team, and I just want a more simplified user experience — and I think that's what Jason and I put together, and that can address it. There's also the other user story, which is: I'm a user, I want a Kubernetes cluster.
H: I also think, like, you can do both — separation of concerns. I think you can get policy enforcement through several mechanisms; like, you could have an admission control webhook, you know, to enforce policy and stuff like that. So I think we can address your concern, Moshe, in a number of different ways that would still allow the generic UX to move forwards and policy enforcement to be a separate thing. Okay.
K: What's the alternative to this? Also, granted we want to improve the user experience — we have more objects, more sorts of fields to provide and to default. You know, is doing this in a CLI or in a shell script an alternative? And if it is, why is that a worse place to start than going straight to, you know, annotations or fields and doing these server-side defaults?
A: I think that's a good question, worth recording as a comment on the issue, if you wouldn't mind. I also would say that anything you do client-side has to be implemented in a client, and maybe you don't want to use that client — just like, if you aren't using kubectl to do everything, you may end up losing out on some functionality if it's only written into kubectl. So doing it as an annotation or a spec field, I think, helps guarantee that it works everywhere.
I: I mean, my vision for this, from a user standpoint, is I would like to create, I guess, a CRD — I just request a cluster in GCP, and then that thing stamps out all the things that I need to happen, and the administrator on that master cluster has already set up all the defaults, or whatever is configured, minimally. So the Cluster object and all the other objects don't necessarily have to be end-user-consumable; there could be another thing.
H: Right, let's get on the boxing gloves and talk about version schema. Currently, if you look across providers and look across CAPI, it's unintelligible to the average consumer what version of CAPI applies to which provider, et cetera, et cetera. So in that issue there's a couple of different proposed ideas. I don't really have strong opinions on A versus B, other than I really have strong opinions that we need to pick something. And I also have strong opinions that we should follow upstream where possible, and we don't need to be pedantic about certain things with semver, right, because almost every implementation I've ever seen never truly follows semver to the tee. So, with those two in mind, maybe we could have a little bit of a discussion.
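As background on the semver point, here is a minimal sketch of strict precedence parsing plus a toy compatibility rule. The rule itself is invented for illustration (it is not a real Cluster API policy), and pre-release handling is omitted — which is exactly the kind of corner real implementations tend to fudge.

```python
def parse(version: str) -> tuple[int, int, int]:
    """Parse a 'vMAJOR.MINOR.PATCH' string into a comparable tuple."""
    major, minor, patch = version.lstrip("v").split(".")
    return int(major), int(minor), int(patch)

def compatible(provider: str, core: str) -> bool:
    """Toy rule: same major version, and provider minor >= core minor.
    Hypothetical policy, for illustration only."""
    p, c = parse(provider), parse(core)
    return p[0] == c[0] and p[1] >= c[1]

print(parse("v0.3.7"))                  # (0, 3, 7)
print(compatible("v0.4.1", "v0.3.0"))   # True
print(compatible("v1.0.0", "v0.3.0"))   # False
```

Tuples compare lexicographically in Python, so `parse` also gives correct ordering for plain releases; real semver additionally defines pre-release precedence, which this sketch ignores.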
H: I think I want to have this enforced for v1alpha2, because for any person who's trying to look at this and look across providers, it's unintelligible — you can't make any sense out of it at all, right? You can't look at any provider and route back easily to where it came from. You'd have to go through the SHAs and then figure out from the vendors what it actually means and where it came from, and the average consumer should not have to do something like that. That's too much. Michael?
I: This reminds me of the point I've been trying to make. If the future is having all these different providers deployed into the same namespace, on one deployment of Cluster API, then we need something more than just a written mechanism to ensure version compatibility. We need something more dynamic.
C: So this kind of follows on to Michael's comment. I think, to me, the problem is less around the actual versions of the components and more around knowing kind of how things work, and I think with the simplified management experience that we've been talking about with the clusteradm operator, it would be more about publishing and managing metadata between the components — having a machine-readable way to determine compatibility between the components, rather than trying to strictly tie in, you know, actual release binary versions and trying to get those to align.
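Such machine-readable compatibility metadata might look something like the following — an entirely hypothetical format, since nothing like this was specified at the time; every field name here is invented for illustration:

```yaml
# Hypothetical metadata a provider could publish alongside each release,
# letting tooling check compatibility without comparing version numbers directly.
provider: cluster-api-provider-example
release: v0.4.1
compatibleWith:
  cluster-api:
    contract: v1alpha2
```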
C: The thing that I fear there is that we're gonna have issues where something could be a breaking change on a provider side that isn't tied to a breaking change on the Cluster API core side, and trying to manage those with aligned versions — and to get across to the users that some action needs to be taken — I worry will get lost in, you know, just publishing patch updates.
H: I hear what you're saying, but at the same time, it's been unintelligible to date — like, what we have today is super bad. It is unintelligible to any average user, especially if they have a management cluster, right? So if they want to file a bug report, right, for a specific version dating back — there's no means by which we can report information to be able to truly diagnose the state space of what the issue was and where it came from. You have to have a developer. Michael?
I: Yeah, I think — I guess what I'm getting at, and it sounds like Jason's getting at it too, is: we need to have that end goal in mind for whatever we come up with here. Whatever this versioning looks like, and wherever it lives, it needs to be plugged right into whatever the discovery mechanism is, I guess — just to inform the process, not really start there. That's what I'm saying.
M: Yeah, so the only additional info is that, you know, since that meeting we've been focused on jumping in and getting the bootstrap provider into a place where it's all testable and working as kind of one big unit with the Docker provider and CAPI itself. So we haven't started any actual work testing the pivot, because we'd like all of those things to work first, so that we can, you know, have a more comprehensive look at how pivoting will work with both providers functioning.
A: Yeah, and there's other things in the ecosystem, like writing conversion tools to be able to take CAPA v1alpha1 Clusters and Machines and convert them to v1alpha2, and then similar tools for the other providers. There's also the upgrade tool for managing cluster upgrades that would need to get updated for alpha 2. I think there's enough work left that not all of the tooling in the ecosystem is going to be done by the end of August, in terms of getting it updated for alpha 2.
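The conversion tooling mentioned above would, roughly, split the embedded v1alpha1 provider config out into the separate objects v1alpha2 expects. Here is a toy, dict-based sketch; the shapes and the `ProviderMachine` kind are simplified stand-ins, not the real API schemas.

```python
def convert_machine(v1alpha1_machine: dict) -> tuple[dict, dict]:
    """Split a v1alpha1-style Machine (embedded providerSpec) into a
    v1alpha2-style Machine plus a separate infrastructure object.
    Simplified stand-in shapes, for illustration only."""
    name = v1alpha1_machine["metadata"]["name"]
    provider_spec = v1alpha1_machine["spec"].pop("providerSpec", {})
    infra = {
        "kind": "ProviderMachine",  # hypothetical infrastructure kind
        "metadata": {"name": name},
        "spec": provider_spec.get("value", {}),
    }
    machine = {
        "kind": "Machine",
        "metadata": {"name": name},
        # The new Machine references the infra object instead of embedding it.
        "spec": {"infrastructureRef": {"kind": infra["kind"], "name": name}},
    }
    return machine, infra

old = {"metadata": {"name": "m0"},
       "spec": {"providerSpec": {"value": {"instanceType": "m5.large"}}}}
new_machine, new_infra = convert_machine(old)
print(new_machine["spec"]["infrastructureRef"]["name"])  # m0
print(new_infra["spec"]["instanceType"])                 # m5.large
```

The design point the sketch illustrates is the one from the UX discussion earlier: after conversion, bootstrap and infrastructure details live in their own objects and the Machine only holds references.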
H: I had proposed this a couple of times, as PSAs, in the last couple of meetings, and there's plenty of comments there. I was just wondering if anyone had any conversation pieces they'd be interested to discuss with the broader group with regards to the proposal that's written up there. I know Daniel and Moshe commented, and I think Jason did too, as well.
N: Actually, I'm a little bit curious, Tim, about what your vision for the bootstrap portion of clusteradm would look like. I mean, it sounded like you were kind of previously leaning towards using kind as a cluster, and people would need to bring their own bootstrap cluster, but I saw that you did have a bootstrap user story in clusteradm. Could you just speak a little bit to that?
H: I would like to just move forwards. Like, if people have comments, please add them — you know, I've given it like three weeks. I can do a KEP PR, probably, but most of this I would expect to refine over time as well. This is kind of like the proposal for a tool to solve the cross-provider problem, as well as the clusterctl problem, because at the beginning of this cycle we wanted to evaluate whether or not clusterctl would be it.

H: So my plan, tentatively, is to probably push up a KEP — I could easily do it this week — and try to address comments. We already have a repo started and have, you know, the stubs of what we ideally would like to make in place, or at least the beginnings of it, and then maybe move toward getting it merged by the end of the cycle or something.
C: I can confirm that it all appears to be working great. I've been working on updating our integration tests and, as far as I can tell, we haven't had any conflicts related to running within the same account that we were previously seeing, and I haven't heard any complaints from kops since we've renamed the janitor work as well. So I think we are now in the clear. Thank you very much, Justin. Yeah.
H: I think it's up to the community: is the end-user experience important enough to sort of slip the clutch and try to get this into v1alpha2, or are people more pressed to want to get a new cut and rebase their changes? So I think I'll defer to everyone else here. I have opinions, but I don't want to sway other people's decisions.
C: This is where I worry about creating a shadow API. I think if we wait to ship it after v1alpha2 and we end up going annotation-based, we're essentially just extending the API via the annotations, and we're basically just giving ourselves an end-run around, you know, not modifying the actual API schema. I'd prefer we just go ahead and bite the bullet, take on the schema changes, and ship it as part of v1alpha2.
O: I did — sorry, I just hit the wrong button. Well, I feel like we spent a lot of time — a lot of very many people — ironing out API changes for v1alpha2, and so I don't think we'll be able to dedicate the same amount of time and effort before the end of the cycle to do that. And I'm still not a hundred percent clear on exactly what this should look like; this is the first time I've seen this issue, so I think it still needs…
C: Yeah, I was just saying that adding support for this might be easier once we have control plane management primitives. Otherwise it's going to potentially create some complications for, you know, how do users manually consume this, right?
A: So this is definitely something that, at least on the VMware side, we've had a request to be able to do. I don't think that the kubeadm bootstrapper would be operating the external etcd; we just need to make sure that it's possible to point at it, which it should be today with the kubeadm config. So I think this probably belongs maybe in the cluster-api repository, or some other one.
D: This is about how to make this debuggable, pretty much. I just haven't used it enough to know what's good — should we wrap things? It seems like when I'm using klogr, or when I'm using zap, it kind of produces useless stack traces, and I don't know what klogr does. So if anybody is using this bootstrap provider and has thoughts on how they would like to see errors and stack traces handled, please file comments on this issue.
A: And this one is mine, which is: we have an additional user data files field in the top-level config spec, but it is not wired up to anything. So this is just something that needs to be done, and I think it needs to get priority here. There is code in the AWS provider that has this — if anyone is interested in working on it, we could look at that as an example, and I'll paste a link in here. And that is, I believe, the end of — yeah — the issues here.
A: Thank you for bringing that up. So we are still trying to find a location. We've been working to try and get something in San Francisco but are having trouble, so we may be able to get something in Mountain View. AT&T has offered up their facility in San Ramon, which is about 45 minutes to an hour east of San Francisco.
A: We are right now trying to do the week of September 16th — preferably Tuesday to Wednesday, which would be the 17th and 18th, or Wednesday to Thursday, which would be the 18th and 19th. If that week doesn't work, a large number of people are also available the following week, which would either be the 24th and 25th, or the 25th and 26th.