Description
Meeting minutes https://docs.google.com/document/d/1LdooNTbb9PZMFWy3_F-XAsl7Og5F2lvG3tCgQvoB5e4/edit#
A
So hello everyone, I'm Fabrizio Pandini, and this meeting is the Cluster API office hours. Before starting: we are sharing the document with the meeting minutes. If you want to get access to the document, you have to join the SIG Cluster Lifecycle mailing list. And just a few rules: if you want to say something, use the raise hand feature of Zoom, which is below, near reactions.
A
B
This is just the first trial of doing the patches with the new model, which is bug fixes only, so it should not have any behavior changes or features or anything like that. So please adopt it soon.
A
Maybe, before moving to open proposals, I'll pause for a second and give space for people joining this meeting for the first time to introduce themselves.
C
Hi, so yeah, hi, I'm Jago. I work at Microsoft, in the team that came in with the Kinvolk acquisition, and today I'm covering for Thilo, who has been joining these meetings, because he's out sick. So yeah, hi everyone, good to see you.
A
F
Yeah, thanks Fabrizio. I've made a few small updates and I got a few good comments last week. I'm not quite ready to say it needs review, because there are a couple of to-dos remaining, but I'm really hoping to clean it up this week, because I know next week will be kind of a down week, so that we can put it formally in needs review and maybe get it done before.
B
Yeah, just wondering: should this become a PR soon? I mean, it's been in a doc for a while, but if we're hoping to move forward with the design, it should probably become a PR at some point.
A
F
G
Earlier today I shared some feedback there, so I just wanted to encourage everyone to have a look, so we can keep the discussion going and hopefully we can translate that into a formal proposal during the next few weeks or so.
A
C
Yeah, hi. So I mainly wanted to talk about the PR to get Ignition v2 support.
C
Yeah, it's been a while since we've been working on this, and I think there's only one pending or open issue, and we replied with some reasoning. I wanted to check if we can get this moving, because it would be really nice to get this merged.
C
Yep, so there was a concern raised, by Alberto I think.
C
Yes, exactly. So I think you were proposing moving some of the fields to a ConfigMap, or, you know, out of the API, and from what I understand there was this assumption that on Flatcar there's no need for it, that it doesn't affect the bootstrap of the node, so we could add it somewhere else. But from what Johannes commented in this thread, it is actually needed, so yeah, we're hoping to keep this additional config.
G
C
Thanks, thank you very much. I think after that we should be good to merge, but yeah, we can write it in the PR.
A
H
So, to give some context: what we're currently doing is we have a bunch of resources which have references in them. So, for example, if you have a MachineDeployment or a MachineSet, it has a reference to, I don't know, a DockerMachineTemplate or something, and we have code in our controllers, or, let's say, the controllers corresponding to those resources, which automatically upgrades the references.
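For context in the minutes, here is a minimal sketch of the mechanism being discussed, assuming the controller-runtime client and the `<plural>.<group>` CRD naming convention that kubebuilder/controller-tools generate. The helper name and the naive pluralization are illustrative only, and picking the storage version stands in for however the real code chooses the target version; this is not the actual Cluster API helper.

```go
package refupgrade

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// latestAPIVersionFor looks up the CRD that backs a template reference and
// returns the group/version currently marked as the storage version, so the
// reference can be "upgraded" to it. It relies on the CRD being named
// "<plural>.<group>", which is what controller-tools generates.
func latestAPIVersionFor(ctx context.Context, c client.Reader, ref *corev1.ObjectReference) (string, error) {
	gv, err := schema.ParseGroupVersion(ref.APIVersion)
	if err != nil {
		return "", err
	}
	// Naive pluralization for the sketch; real code would use a proper
	// pluralizer or the CRD's spec.names.
	plural := strings.ToLower(ref.Kind) + "s"
	crdName := plural + "." + gv.Group

	crd := &apiextensionsv1.CustomResourceDefinition{}
	if err := c.Get(ctx, client.ObjectKey{Name: crdName}, crd); err != nil {
		return "", fmt.Errorf("looking up CRD %s: %w", crdName, err)
	}
	for _, v := range crd.Spec.Versions {
		if v.Storage {
			return schema.GroupVersion{Group: gv.Group, Version: v.Name}.String(), nil
		}
	}
	return "", fmt.Errorf("no storage version found in CRD %s", crdName)
}
```

Whether that Get goes through the cached client (pulling every CRD in the management cluster into the cache) or through an uncached reader is exactly the trade-off discussed below.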
H
So what we're considering is whether we should drop the fallback, to avoid the case where the CRD cache essentially blows up in the controller, which leads to the question: can we even do that, who would be affected, and so on?
H
And it's a follow-up of an issue we had recently, where KCP was producing log messages because we didn't have the right permissions and so on, but that's not really important for this.
H
So currently you can only look at the definitions yourself and check if your CRD names match that pattern. I think we could do something like blocking it in clusterctl: if clusterctl is installing a provider and there's a CRD there which doesn't comply with the pattern, we could return an error if we want, but that's currently not there.
I
Just to clarify: the controller-tools and kubebuilder projects do generate names like these, so those should not be affected whatsoever.
I
It's
if
you're
literally
creating
those
crd
manually,
which
I
hope
you're
not
doing,
because
that's
kind
of
that's
a
lot
of
work,
but
if
folks
are
like,
if
that's
where
like
it
could
break
if
the
kind
does
not
match
the
alternative
here
is
like
to
keep
the
listing,
but
without
caching,
which
would
be
like
still,
you
know,
really
slow,
because
you
have
to
list
all
the
crts
and
incur
in
like
lots
of
memory
usage.
B
Yeah, I think from the provider's perspective this is not impacting, as long as you were doing things the right way, which is very likely; like it's being said, I think you'd really have to go out of your way to not have CRDs named the way kubebuilder names them. But that being said, I think it would be better if we had some sort of automated way to warn providers if they're not using this naming.
B
But if not, in the meantime I think we could at least start with adding it to the contract, and also maybe opening issues in the provider repos that we know of, to audit their CRD names. I think it should be a pretty easy manual check to do: just go to the CRD bases directory and check that all the names match. So, just to make sure that we're warning providers ahead of time.
B
A
Okay, Stefan, do you think it's doable to go down this path, so adding a warning to clusterctl and kind of defining a deprecation path for these features? So when this happens, no one will be surprised by it.
H
Yeah, I think it shouldn't be a big problem to adjust clusterctl to emit a warning for a release or so, and, of course, immediately add it to the contract and then drop it in a later release or something; that sounds reasonable. And for the known providers, I'm not sure if we have to open issues; I, or whoever does it, can probably just take a look at the CRDs.
H
Good question. I mean, if we introduce a warning in clusterctl, some of our tests are using clusterctl, so it depends. We could add a warning to clusterctl, and we could add a flag so that warnings may be blocking, and then we can adjust our tests to use that flag. So essentially the tests which are using clusterctl would break if the CRDs don't match. I think something like that should be possible.
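A minimal sketch of the kind of check being discussed, assuming the `<plural>.<group>` naming contract; the function and the warnings-as-errors flag are hypothetical illustrations, not existing clusterctl code:

```go
package contractcheck

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// CheckCRDNames verifies that every CRD shipped by a provider follows the
// "<plural>.<group>" naming convention. With warningsAsErrors set (e.g. in
// e2e tests), a violation fails hard; otherwise it is only reported.
func CheckCRDNames(crds []apiextensionsv1.CustomResourceDefinition, warningsAsErrors bool) ([]string, error) {
	var warnings []string
	for _, crd := range crds {
		expected := crd.Spec.Names.Plural + "." + crd.Spec.Group
		if crd.Name != expected {
			msg := fmt.Sprintf("CRD %q does not follow the naming convention, expected %q", crd.Name, expected)
			if warningsAsErrors {
				return warnings, fmt.Errorf("%s", msg)
			}
			warnings = append(warnings, msg)
		}
	}
	return warnings, nil
}
```

E2E tests could call this with warningsAsErrors set, which is roughly the blocking behavior mentioned above.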
B
Yeah, I think, if anything, it would fit really well in a conformance test, like a CAPI provider conformance test, which we've talked about several times but haven't really gotten too far with. But yeah, a lot of these contract things could be automatically checked, right, and I think this would be one of them.
A
I
I would still suggest, for 1.1, to use an API reader client so that we never cache CRDs whatsoever, because caching the whole CRDs means all CRDs installed in the cluster will be cached, which is a lot of them, and they're usually pretty big as well, like in terms of megabytes of data. So maybe, for the first iteration, like in 1.1...
I
We could potentially always opt in to use the metadata client first, and then, if we have to fall back, we can use an API reader instead, which goes directly to the API server, and print out lots of warnings.
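A rough sketch of what that could look like with controller-runtime, assuming a cached client plus the manager's uncached APIReader as fallback; the helper name is illustrative, not existing Cluster API code:

```go
package crdlookup

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getCRDMetadata first tries the cached client with a metadata-only object,
// so only ObjectMeta/TypeMeta of the CRD ends up in the cache instead of the
// full, often multi-megabyte CRD. If that fails, it falls back to the
// uncached reader (e.g. mgr.GetAPIReader()), which goes straight to the
// API server and never populates the cache.
func getCRDMetadata(ctx context.Context, cached client.Client, uncached client.Reader, name string) (*metav1.PartialObjectMetadata, error) {
	meta := &metav1.PartialObjectMetadata{}
	meta.SetGroupVersionKind(apiextensionsv1.SchemeGroupVersion.WithKind("CustomResourceDefinition"))

	if err := cached.Get(ctx, client.ObjectKey{Name: name}, meta); err == nil {
		return meta, nil
	}

	// Fallback: direct, uncached read from the API server.
	if err := uncached.Get(ctx, client.ObjectKey{Name: name}, meta); err != nil {
		return nil, err
	}
	return meta, nil
}
```

The exact caching behavior depends on how the manager's cache is configured; the point of the sketch is only that the fallback path never populates the cache.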
E
A
It seems, I see, Stefan plus-oned this idea, so it seems there is a way forward. Maybe we update the issue, and when we figure out the details we can also consider sending a mail to the mailing list, so everyone is informed, and then we move forward with this.
B
Oh yeah, I was keeping this for the end, but just wondering: given that next week is Thanksgiving in the U.S. and a lot of people usually take the whole week off, I'm wondering if we should, you know, give ourselves a break and skip the meeting next week? I don't know how many people are planning to be here.
A
I give my generic plus one, because even if there are some folks around, we basically cannot decide anything, because not all of the other people will be around. So my plus one for cancelling, and eventually, if people in Europe want to catch up, do some hacking or some code walkthrough, let's figure it out and we can record it, but it would be kind of an informal get-together if we want it.
B
J
Yeah, mainly just broadcasting an issue: I filed an issue to start discussing failure domain support for managed clusters with the community. So please take a look, and if you have ideas or use cases, please add them.
J
So yeah, I guess there are a few things with this work. There are two components to it, the control planes and the workers, and I've added some of the available options down there. It seems there's at least some initial agreement on control planes, so yeah, please take a look, we might have missed something. And for workers...
J
That's, I think, where we need to bring up some of the discussions, especially regarding cases where users want to have heterogeneous configurations for their machines, say they have GPUs, they have some fancy network cards, but they also want to spread those workloads across AZs.
J
They'd have to create various machine deployments and slice and dice all of the combinations. I don't think that's the ideal case, so that's also something that we should probably discuss. I linked an ancient issue that I had filed that would be a good fit for that one, but we can treat it as orthogonal; we should probably just keep it in mind.
I
Yeah, I was actually going to call that out here. I think, like we said, we probably won't want to automatically spread MachineSets or machine deployments over multiple failure domains, but we'd definitely like to keep the failure domain as selecting one for each machine deployment. So, I guess, long term the topology could decide how to automatically set this field based on some rules, but we don't have to do that today. But it seems like...
J
Yeah, for now the discussion is mainly focused on selecting and setting the field for a specific set of workers, not spreading for a given machine deployment. Yeah.
K
Sorry, I was slow to find the thing there. It sounded like one of the things being talked about there, maybe Vince was mentioning this, was a single machine deployment spanning multiple zones, I mean not availability zones, but these failure domains. Is that something that we're considering? I'm just curious from the autoscaling side: exposing some of that information up through the node groups might get messy.
J
So this issue for now is only scoped to putting specific machine deployments in failure domains. But if we're talking about spreading a given machine deployment across AZs, that would probably mean reopening the older issue that is linked and moving the discussion there.
A
L
There is a, I don't know if I want to use the word niche, but there is a use case for when you're using machine pools specifically, because they support multi-AZ auto scaling groups, and there's a whole bunch of caveats tied to that, like you run into problems with EBS persistent volume storage and things like that. But if your workloads aren't using those and are very agnostic as to where they get scheduled, then the multi-AZ auto scaling group works pretty well.
L
But I don't know if that is the use case we're talking about here, and I don't know if there's anything here that helps with those other caveats in a multi-AZ auto scaling group, and I don't know how that problem ties into what's being proposed, but that would be what I'd be curious about hearing.
K
Yeah, I mean, given what you're talking about there, Dane: I think from the autoscaler side, if we're talking about the Cluster API implementation, if we get to the point where we have a machine pool that presents itself as a node group to the autoscaler, even if that single machine pool spanned multiple zones, the autoscaler wouldn't care at that point, because it's not doing any comparison between the node groups. Now, the topology you're talking about, where there are ASGs within that machine pool...
K
You know, that's something where I don't know how we would support it on the autoscaler side, if we're using the Cluster API provider, because currently there's no notion of infrastructure-provider-specific auto scaling technology. So my concern was more about getting into a situation where we have, you know, a single machine deployment that's somehow spanning multiple zones, and then we have to present information to the autoscaler about what zones that node group comprises; that will get really complicated on that side.
K
That was kind of what I was getting into. But from what you're talking about, Dane, I have a feeling that when we get to the point where we have machine pools surfacing up to the autoscaler, it will still be the same interaction we have now, where it's just, you know, replica changes basically, and then presumably whatever is behind the machine pool will be doing that sophisticated logic, kind of the business logic that you're talking about.
L
Right, no, I understand that part. My question was more to, I'm sorry, I missed the other person's name, but, yes, yes, thank you: how would the failure domain support play into that? I'm not sure what the use case is and how that maps to the capabilities of the infrastructure at the moment.
J
So I guess, for now, the initial... so originally we started exploring this for machine deployments; I didn't start looking at machine pools. But as we're looking into machine pools, we should probably ensure that we satisfy, at the very least, the current use cases that we have today if you use a machine pool separately, without any ClusterClass or managed clusters.
L
Okay, I'll dig into the issue a little bit more to better understand the use case, because I think, is this not already possible with, you know, three machine deployments?
A
I can try to give my take on this. I think that currently a machine deployment only supports one failure domain and, as of today, we can use machine deployments in ClusterClass, but there is no way in the current API to specify the failure domain, the single failure domain, a machine deployment should use. So this is the first problem that we want to address.
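For context, a minimal sketch of the knob under discussion, assuming Cluster API's Go types from sigs.k8s.io/cluster-api/api/v1beta1, where MachineSpec carries a single optional failureDomain; required fields such as the selector and the bootstrap and infrastructure references are omitted for brevity, and how a corresponding knob would be exposed through the ClusterClass/topology API is exactly what the issue is about, so nothing here should be read as the proposed design:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/pointer"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// newMachineDeployment shows the single failureDomain knob that a standalone
// MachineDeployment has today: one optional value per MachineDeployment,
// applied to every Machine it creates.
func newMachineDeployment(clusterName, failureDomain string) *clusterv1.MachineDeployment {
	return &clusterv1.MachineDeployment{
		ObjectMeta: metav1.ObjectMeta{Name: "md-0", Namespace: "default"},
		Spec: clusterv1.MachineDeploymentSpec{
			ClusterName: clusterName,
			Template: clusterv1.MachineTemplateSpec{
				Spec: clusterv1.MachineSpec{
					ClusterName:   clusterName,
					FailureDomain: pointer.String(failureDomain), // e.g. "1" or "us-east-1a"
				},
			},
		},
	}
}
```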
A
If I got the issue right, it is basically about a better integration between ClusterClass and what machine deployments offer today. Then, around the topic of failure domains, there are some additional discussions that should be figured out. One is: should a machine deployment support many failure domains? This was discussed in the recent backlog grooming, and the idea is probably not, but it is a little discussion that maybe we should open or not, if there are arguments. And another side discussion is about failure domains and machine pools, and the integration of machine pools with autoscalers. So all of these are topics that should be addressed, but I think they are kind of orthogonal with respect to this issue, if I'm not wrong. Do you find my recap somehow meaningful?