From YouTube: 20200513 scl cluster api
A: This meeting adheres to the Kubernetes etiquette guidelines, which I guess is, kind of, "be nice to everybody." This meeting is also being recorded, and if you'd like to speak, please use the raise hand feature and I'll call on you, I guess. At the beginning, we like to give a little time for newcomers to the project to kind of introduce themselves. So we'll take a few seconds here, and if anyone would like to unmute and introduce yourself, please do.
C: If you want to take a look, there is a long explanation of why this happened and what we did to fix it, and this will probably become the new minimum version when you want to upgrade or run, in parallel, the v1alpha2 and v1alpha3 controllers. So, yeah, take a look, and if you're interested in upgrading from v1alpha2 to v1alpha3 in the future, make sure to update to 0.2.11 first. Any questions on that?
D: Yeah, for sure, I would love to see adoption in other providers as well. I think the first step is to go open issues and see if anyone is interested in helping out for each provider; I think my team can also help with that. But I don't know if we can go implement it in every provider; we'd probably need support from each provider. Let's do that. Yes, to the question: the answer is yes.
C: When Juan Lee and I kind of got together at the face-to-face in September last year, one goal for MachinePool was to actually do everything out of band and not use Machines underneath. So MachineHealthCheck today, because it looks at Machines, actually cannot work on MachinePool. But we also thought that, while it's an experiment, we could have some sort of shim between the Machine and the MachinePool.
C: That's actually because, at the end of the day, we just need the node reference to actually health check the nodes, so it could be extended. But then MachineHealthCheck would need to know about how MachinePool operates, so it needs a little bit of redesign. As an alternative, we can have a MachinePool health check, I mean.
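As a minimal sketch of the constraint being described, assuming simplified stand-in types rather than the real Cluster API types: a health check discovers its targets by following each Machine's node reference, so out-of-band MachinePool replicas, which have no Machine objects underneath, are invisible to it.

```go
package main

import "fmt"

// Simplified stand-in for illustration; the real type lives in Cluster API.
type Machine struct {
	Name    string
	NodeRef string // name of the workload-cluster Node, empty until it registers
}

// nodesToCheck mirrors how a health check finds nodes: only Nodes that some
// Machine points at are reachable. MachinePool replicas without Machine
// objects never show up here.
func nodesToCheck(machines []Machine) []string {
	var nodes []string
	for _, m := range machines {
		if m.NodeRef != "" {
			nodes = append(nodes, m.NodeRef)
		}
	}
	return nodes
}

func main() {
	machines := []Machine{{Name: "m-0", NodeRef: "node-0"}, {Name: "m-1"}}
	fmt.Println(nodesToCheck(machines)) // [node-0]
}
```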
G: Yeah, thank you. I think some of those options on MachineHealthCheck, for example the unhealthy-node kind of timeout, may actually be platform features that could be enabled for MachinePool as well. So, for example, in MachinePool I think we had the ability to say how long we wait for a node to start up or, for example, how many unhealthy nodes we're willing to tolerate, and these might be, you know...
G: ...auto-scaling groups are probably similar, right? I don't know offhand, but they probably have similar constructs as well. So these might be things that we need to take into account when provisioning the MachinePool, and possibly listen for changes to those and then update the MachinePool in the provider itself.
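For context, these are the kinds of options being referred to. A minimal sketch with simplified stand-in types (the real v1alpha3 MachineHealthCheck API spells them NodeStartupTimeout and MaxUnhealthy, alongside a list of unhealthy node conditions); whether a MachinePool implementation would map these onto an auto-scaling group's own constructs is exactly the open question on the call:

```go
package main

import (
	"fmt"
	"time"
)

// Simplified stand-ins for the MachineHealthCheck knobs mentioned above.
type UnhealthyCondition struct {
	Type    string        // node condition type, e.g. "Ready"
	Status  string        // the status that counts as unhealthy
	Timeout time.Duration // how long it may persist before remediation
}

type HealthCheckSpec struct {
	NodeStartupTimeout  time.Duration // how long to wait for a node to appear at all
	MaxUnhealthy        string        // number or percentage of unhealthy machines to tolerate
	UnhealthyConditions []UnhealthyCondition
}

func main() {
	spec := HealthCheckSpec{
		NodeStartupTimeout: 10 * time.Minute,
		MaxUnhealthy:       "40%",
		UnhealthyConditions: []UnhealthyCondition{
			{Type: "Ready", Status: "False", Timeout: 5 * time.Minute},
			{Type: "Ready", Status: "Unknown", Timeout: 5 * time.Minute},
		},
	}
	fmt.Printf("%+v\n", spec)
}
```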
H: Just a comment to David: at the moment, with MachineHealthCheck you define conditions on nodes within Kubernetes, so I don't think it would really translate to the kind of health checks you get on a scale set or an auto-scaling group. But it's an interesting idea that we should explore.
F: Sorry, I'm now turning off the camera because I have some network problems today, but I want to give a quick update on the condition types. At the beginning of this week we pushed an updated version that now contains use cases and an example that were added following some comments. Another important point is that we are mostly aligned, as far as I know, with the Kubernetes KEP for standardizing conditions; our proposal only introduces, on top of it, an additional field for improving visibility on longer-running tasks. And, thanks to Vince...
F: ...we put a big effort into defining some guidelines in order to ensure consistency in the adoption of conditions in Cluster API. For instance, we defined rules around condition semantics and condition polarity, so every condition should have a state where True means good; and, in the end, good means that the component or the object is ready to serve application workloads, which is the end goal of creating a cluster. We also added more in the proposal.
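A minimal sketch of the polarity guideline, with illustrative field names rather than the proposal's final types; the Severity field here stands in for the addition on top of the Kubernetes KEP mentioned above, as I understand it:

```go
package main

import "fmt"

// Illustrative condition shape following the discussion.
type Condition struct {
	Type     string // e.g. "Ready"
	Status   string // "True", "False", or "Unknown"
	Severity string // e.g. "Error", "Warning", "Info"
	Reason   string
	Message  string
}

// The polarity rule: for every condition, Status == "True" means good, and
// good ultimately means the object is ready to serve application workloads.
func isGood(c Condition) bool {
	return c.Status == "True"
}

func main() {
	c := Condition{
		Type:     "Ready",
		Status:   "False",
		Severity: "Info",
		Reason:   "WaitingForInfrastructure",
		Message:  "infrastructure provisioning in progress",
	}
	fmt.Println(isGood(c)) // false: not yet ready to serve workloads
}
```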
I: So, like two weeks ago I think, we updated the MachineHealthCheck proposal to use annotations. This was part of trying to get MachineHealthCheck support into kubeadm control plane machines. But then, as we were implementing it, we realized that this would effectively give users who have access to edit annotations, which is just the edit permission on Machines...
I: ...the ability to, you know, effectively delete, or pause the deletion of, the machine by just adding the unhealthy annotation. So we went back to the drawing board, and we're now thinking of using this new conditions feature to achieve the same thing, just with more restrictive access, since it's in the status of the machine. I just opened the PR to update the proposal this morning, so I'm looking for feedback.
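A sketch of the distinction, with simplified stand-in types and a hypothetical condition name: Kubernetes RBAC grants verbs per (sub)resource, so mutating status requires update on the machines/status subresource, while annotations are writable by anyone with plain update rights on machines, which is why a remediation signal in status is harder to forge.

```go
package main

import "fmt"

type Condition struct {
	Type   string
	Status string
}

type Machine struct {
	Annotations map[string]string // guarded only by update on "machines"
	Conditions  []Condition       // lives in status, guarded by "machines/status"
}

// needsRemediation reads the signal from status instead of an annotation.
// "HealthCheckSucceeded" is a hypothetical name, not the proposal's.
func needsRemediation(m Machine) bool {
	for _, c := range m.Conditions {
		if c.Type == "HealthCheckSucceeded" && c.Status == "False" {
			return true
		}
	}
	return false
}

func main() {
	m := Machine{Conditions: []Condition{{Type: "HealthCheckSucceeded", Status: "False"}}}
	fmt.Println(needsRemediation(m)) // true
}
```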
J: Thank you. So I created the PR for the proposal for the extension of template processing for clusterctl, and I kind of started the lazy consensus countdown clock yesterday. I'm looking for any feedback, if the community is concerned about anything, and there are a few open questions that I'm willing to discuss, primarily around formatting conventions. But yeah, if anybody wants to take a look at the proposal, the GitHub links are in the doc; any feedback is much appreciated. Thank you.
K: Thanks. I'm not exactly sure if this would be the right place to do this, but basically there's been some work being done in Metal3, and some discussion that it could be somehow related to what is done in CAPI, and specifically to what has been done with the proposal. So I don't know if anyone from that effort is on the call or not.
L: So, things like bare metal or vSphere. And I don't see specific use cases for cloud providers because, usually, users that rely on any IPAM solution other than the one provided by the cloud provider are relying on something they have on-prem, and they're also using it as an extension for their cloud providers. So this is usually set up by an admin. So, after giving it some thought, I'm not sure if it's something that we'd want to have at the CAPI level.
M: I think the discussion we had at that meeting was more: is there sufficient commonality between Metal3's, and perhaps even OpenStack's, requirements in this category, such that there should be a common effort and deduplication? I'm not sure if anyone's got far enough into it to figure that out.
H: So this is the external remediation topic that has been coming from the Metal3 folks for a while now. It's gone through several revisions, and recently it was decided that we should look at having an external remediation CRD to solve the problem. I've been working with Andrew and Nir, who I notice are on the call, so feel free to jump in. I think it's looking pretty good now, so yeah, I think they're looking for some reviews from people. Nir, if you've got anything to add to that?
M: So this proposal is to eventually replace that mechanism with JSON patches. Right now the proposal is based around pointing at a directory, and then the patches in that directory will get applied. I think it's worth looking at: if people do have requirements around changing those pods that are delivered by kubeadm, they should look at the proposal and see if it's going to meet their requirements in terms of how they might consume it via Cluster API, if we eventually put that in as a feature.
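The proposal itself will settle the exact patch format and directory layout; purely as an illustration of the mechanism, this is what applying an RFC 6902 JSON patch to a static-pod-style manifest looks like in Go, using the github.com/evanphx/json-patch library:

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// A toy manifest of the kind kubeadm lays down as a static pod.
	pod := []byte(`{"kind":"Pod","spec":{"containers":[{"name":"kube-apiserver","image":"k8s.gcr.io/kube-apiserver:v1.18.2"}]}}`)

	// An RFC 6902 JSON patch; under the proposal, files like this would sit
	// in a user-supplied directory and be applied to the matching component.
	patchJSON := []byte(`[{"op":"replace","path":"/spec/containers/0/image","value":"k8s.gcr.io/kube-apiserver:v1.18.3"}]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		panic(err)
	}
	patched, err := patch.Apply(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patched))
}
```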
M: I think there was some interest from Azure's side in how we were going about that and, based on that, there's now a PR that basically copied the Service APIs' evolution, where we have cluster-scoped account resources and a part of the spec says which namespaces are allowed to consume that particular account. I think it'd be great, if we do start doing this across providers, that we have a common enough approach.
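A minimal sketch of that pattern, with illustrative type and field names rather than any provider's actual API: a cluster-scoped identity declares which namespaces may reference it, and the controller checks the consumer's namespace against that list.

```go
package main

import "fmt"

// Illustrative shapes only, loosely following the Service APIs pattern
// mentioned above.
type AllowedNamespaces struct {
	List []string // explicit namespace names; a label selector is another option
}

type ClusterIdentitySpec struct {
	// Which namespaces may reference this cluster-scoped identity.
	// nil means no namespaces at all (default closed).
	AllowedNamespaces *AllowedNamespaces
}

func allowed(spec ClusterIdentitySpec, ns string) bool {
	if spec.AllowedNamespaces == nil {
		return false // default closed
	}
	if len(spec.AllowedNamespaces.List) == 0 {
		return true // empty list here stands for "all namespaces"
	}
	for _, n := range spec.AllowedNamespaces.List {
		if n == ns {
			return true
		}
	}
	return false
}

func main() {
	id := ClusterIdentitySpec{AllowedNamespaces: &AllowedNamespaces{List: []string{"team-a"}}}
	fmt.Println(allowed(id, "team-a"), allowed(id, "team-b")) // true false
}
```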
O: The context for this was from CAPZ: I found that we store secrets in the kubeadm config as plain text. I opened an issue for this upstream, because the fix probably needs to be in the bootstrap provider, and then I found out there are two other open issues and a PR in progress, but it looked like they'd gone a little bit stale. So I kind of picked that back up and refreshed the PR, and we'll see what people think and how controversial it gets.
A: Okay, great, thanks. I don't see any hands up about that, so I guess at this point we'll move on to new issue triage, unless anyone has any general questions they've thought of since the beginning that they'd like to add. And this is the one part I did not have set up, so I might need Vince's help doing some of the triage here.
I: So this is an existing issue in MachineHealthCheck, so it doesn't matter that much today: if you have two MachineHealthChecks that apply, let's say you have one machine and two MachineHealthChecks that both apply to it, then the first one that fails will delete the machine. But going forward, this is going to be more problematic, because then both MachineHealthChecks will run; one of them might say it's unhealthy, the other might say it's healthy, and it's kind of going to get really complicated. So we at least want to document to users that they probably shouldn't do this; they should just have one that's, yeah, all-encompassing for the machines that it's intended to target. But we might want to do some automatic detection of this too. It's just hard.
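The overlap being warned about is easy to produce with two label selectors. A small sketch using the label machinery from k8s.io/apimachinery; the machine labels and selector strings here are made up:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// One machine's labels, and two selectors from two separate
	// MachineHealthChecks that both happen to match it.
	machine := labels.Set{"cluster": "prod", "pool": "workers"}

	mhcA, _ := labels.Parse("cluster=prod")
	mhcB, _ := labels.Parse("pool=workers")

	if mhcA.Matches(machine) && mhcB.Matches(machine) {
		fmt.Println("machine is targeted by two MachineHealthChecks; their decisions may conflict")
	}
}
```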
C: I would say let's either open a different issue for documentation or tackle documentation as part of this effort, but this could go into either the 0.3.x or the Next milestone. Even if we don't have any good solution yet, we should definitely, 100%, document it separately.
F: And so this was derived from a discussion in another issue. Right now clusterctl has very limited support for multi-tenancy, and I opened a set of issues to try to understand what the future of this support will be, and for keeping track, because the clusterctl support for multi-tenancy has some limitations. This is one of those limitations, and yeah, it's just a matter of keeping track of this.
F: So the issue that Nadia discussed before is about what we are doing for multi-tenancy, in terms of: are we going with one instance that supports many tenants, or many instance installations in the cluster? Now, to be honest, in clusterctl most of the complexity is due to multi-tenancy, and also in manifest generation we have most of the complexity due to multi-tenancy; if you think, for instance, of the capi-webhook-system namespace, and the fact that we have to install a separate copy of the controllers for running webhooks.
F: The UX is not nice: you have to call clusterctl upgrade basically for every tenant, and it is required in order to use the instance-specific variables that each tenant has. It is not documented yet, and we had a user hitting the problem, so it would be nice at least to have this documented, and to confirm the upgrade flow works now.
O: In CAPA there are some mutable fields that it knows how to reconcile on the infrastructure-specific machine objects, but we kind of lost the ability to manage those from the MachineDeployment level in the v1alpha2 sort of break-out. We could recover that ability by walking back up from the infrastructure machine through the Machine to the MachineSet, which has an immutable reference to the template, and reconciling along that path inside of CAPA. But that doesn't work for KubeadmControlPlane, and it might not work for MachinePools, depending on details I'm not familiar with; it sounds like that may still be undecided. And there are certainly a lot of steps involved in doing it that way.
O: And so this was sort of surfacing the idea of adding a field to the machine to say "this was created from this template," to short-circuit that link, so it's reliable any time you have a machine, and also so we don't have to have the specifics of what that machine is, a MachineSet-owned machine versus a KubeadmControlPlane machine versus a MachinePool, reimplemented in many different providers.
B: So, if you're doing MachineDeployment-based machines, then the only way to change any of the characteristics of any new machine you want to roll out is to change the reference to an AWSMachineTemplate or whatever, and I think the same is true for MachineSets. I mean, I know that MachineDeployments create new MachineSets whenever something changes, but if you were just working with Machines, I believe that that reference is mutable.
C: I see. So I'd say, for example, if you want to change a security group, then you change it on the template, and then eventually the AWSMachine reconciler would go and reconcile and say: oh, I have this machine from a template, and there's a mismatch, so I should update these fields on the machine as well.
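A hypothetical sketch of that flow; the field and type names are illustrative only, since no API was agreed on the call. A machine carries a write-once back-reference to its source template, and a reconciler copies mutable fields across on mismatch:

```go
package main

import "fmt"

type MachineTemplateSpec struct {
	SecurityGroups []string
}

type InfraMachine struct {
	TemplateRef    string // write-once back-reference to the source template
	SecurityGroups []string
}

// reconcile diffs the machine's mutable fields against the template it was
// created from and copies them back on mismatch.
func reconcile(m *InfraMachine, templates map[string]MachineTemplateSpec) {
	tpl, ok := templates[m.TemplateRef]
	if !ok {
		return
	}
	if fmt.Sprint(m.SecurityGroups) != fmt.Sprint(tpl.SecurityGroups) {
		m.SecurityGroups = append([]string(nil), tpl.SecurityGroups...)
	}
}

func main() {
	templates := map[string]MachineTemplateSpec{
		"workers-v2": {SecurityGroups: []string{"sg-default", "sg-ssh"}},
	}
	m := &InfraMachine{TemplateRef: "workers-v2", SecurityGroups: []string{"sg-default"}}
	reconcile(m, templates)
	fmt.Println(m.SecurityGroups) // [sg-default sg-ssh]
}
```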
C: Okay, I mean, I wouldn't be opposed to that. I just pointed out that maybe the template field, like an object reference, could go into the infrastructure provider, because it is infrastructure-specific. So the AWSMachine could have the template reference, and whoever creates that machine, say a MachineSet or a MachineDeployment, can fill that in if the CRD has it. Does that make sense, or something like that, yeah.
C: I think, I mean, from my perspective, this seems like a backward-compatible change. I don't have strong objections if it's mostly adding a new, informative field, which should probably be write-once, I think, and we can make sure that there are webhooks in place. So I would give my plus one to go forward with it, yeah.
B: I forgot one thing. So, with all of the proposals that we discussed, or that were mentioned earlier today, and that we've been talking about for the past couple of meetings: we have gotten to the point with at least two or three of them where the comments on the Google Doc were slowing down, which was why we had folks create pull requests with the actual CAEPs. So we've done, or started, the clock for lazy consensus on a couple of them; you'll see that in the comments.
B: So, just echoing what everybody said before: please take a look at all the proposals, and if there are any showstoppers or serious problems, please bring them up. Otherwise, we're doing the standard one week for lazy consensus, and when that expires, we'll merge the proposals and start working on them.