A
Alrighty, well, welcome everyone to this meeting of the Cluster API Provider AWS, part of SIG Cluster Lifecycle. As usual, we abide by the CNCF code of conduct, so please be respectful to each other. If you would like to talk, please use the raised-hand feature in Zoom. I've shared the new meeting notes, so this is the new meeting notes document for 2023, in the chat window.
A
Please feel free to add your topics for discussion into that, but also add your attendance to the attendee list as well. We will get the calendar invite updated with the meeting notes document.

A
There is an open PR to do that, or an open request with the SIG Cluster Lifecycle leads at the moment to do that, so brilliant. So, if we go to today's agenda.
A
Oh, I think people have been adding themselves to the template as opposed to the actual entry.

A
Easily done. What we should do, actually, is what other providers do: they gray that section out, which is a good point. We can do that.
A
I will start with... we only have one PSA today, and that is that Sedef has moved herself to emeritus maintainer.

A
So basically she feels like she can't continue on and give the time, so yeah, she moves to emeritus, which is sad. Sedef was with us for a very long time, did a lot of work, helped a lot of people out. So it was quite a sad day to see that coming through, but yeah, things change, so yeah.

A
So, moving on to the action items from the last meeting, which was, I don't know when, before Christmas, must have been the second week of December or something like that. I had an action on me to attempt to schedule a knowledge transfer session on the end-to-end tests for the start of 2023, which I did not do, so I will do that. And then I also had an action item to add an issue for discussion in this office hours, which I did, I guess.
B
Sorry, that was faster than finding the freaking raise-hand thing. How are the tests doing? I wanted to ask — I'm still getting emails about failed end-to-end tests, so what's up? Can I help out with that? It looks like I'll have some time soon to muck around, so I can debug stuff.
B
I know that she has been working on making them more stable and updating this and that, and that some of the AMIs weren't working correctly and require a different build or a newer version of some component. So yeah, she's been doing absolutely amazing stuff. I'll ask her later on.

A
Yeah, oh yeah — ask her to take a look as well, because it'd be good to get them properly stable again.
A
Oh, so the next agenda item was mine, and it came up in a discussion — I forget where, maybe on Slack — and this was about this issue here, which I created quite a long time ago.
A
So when we have a machine deployment — the current advice, I guess, from Cluster API is: if you want machines across multiple AZs, then you create multiple machine deployments, and each of those machine deployments you assign to a different AZ.
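[For reference, a minimal sketch of what that current advice looks like in practice: two MachineDeployments, each pinned to its own AZ through the machine spec's failureDomain field. The cluster name, resource names, AZs, Kubernetes version and API versions below are assumptions for illustration, not taken from the meeting.]

```yaml
# Illustrative sketch only: one MachineDeployment per failure domain (AZ).
# Names, AZs, versions and API groups are assumed for the example.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md-us-west-2a
spec:
  clusterName: my-cluster
  replicas: 2
  template:
    spec:
      clusterName: my-cluster
      version: v1.25.5
      failureDomain: us-west-2a   # pin this deployment's machines to one AZ
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-md
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: my-cluster-md
---
# A second, otherwise identical MachineDeployment targets the next AZ.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md-us-west-2b
spec:
  clusterName: my-cluster
  replicas: 2
  template:
    spec:
      clusterName: my-cluster
      version: v1.25.5
      failureDomain: us-west-2b
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-md
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: my-cluster-md
```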
A
So I guess this issue is around: do we want to support the automatic spreading of machines in a machine deployment across AZs, or at least have it as an option if one isn't specified?

A
So I don't know — someone asked for this and then asked for it to be discussed, but yeah. I don't know if anyone has any thoughts. We can always... go ahead, Daniel.
C
Yeah, this question has come up — I remember discussing this question in various places over the years, really, and I think specifically for us, for a provider, it's...
C
Yeah, I think there are good arguments against it. I don't have them all at the top of my head right now. I think one of them that I do remember is that you lose some determinism — you lose some control over, I guess, where the machines are deployed. For example, if you deploy across multiple failure domains and one of the domains is out...
C
Then there is some kind of rebalancing, right — oh sorry, not rebalancing, but there's a replacement. That replacement is, I suppose, yeah, within our control. But then, if that original failure domain is available again?
C
Do we need to figure out some kind of rebalancing? At that point, as, I guess, the cluster administrator, you don't really know.
C
Yeah, you don't really know what your topology looks like any more, at least from just looking at the size of some machine deployment, whereas if you have machine deployments per failure domain, that's more clear. I think there were other issues around scaling and autoscaling as well. Yeah, I'll dig them up and add them to the issue. I think it's also, yeah...
C
It would be good, if it's not already there, to reference other infrastructure providers — what they're doing, what their experience has maybe been, right? If I'm remembering this correctly, I thought the decision in core was not to support multiple failure domains for a machine deployment. Yeah, so there would also be the question of how do we make it...
A
That's cool, yeah. I know we actually did this with the microVM provider — I just can't remember the reasoning why.
A
We hadn't actually had the concept of... there were just different host machines, because they weren't necessarily in a different failure domain in the traditional sense of, hey, you know, from a networking and switch point of view. So yeah, it is interesting, but it would be good to, I guess, capture those fors and againsts and maybe persist those in the discussion somewhere, so that whenever it comes up again we can point people back to it and go, actually...
C
One follow-up question — so I'm just skimming through the issue — but is there some use case, if we don't have this support, that is just impossible to address?
C
Right, so then, I guess, when you create the cluster, you would have to create multiple machine deployments.
C
What do you mean by "if you don't want that"? Well...
C
No, go ahead — I'm just wondering why... I'm just trying to see if there's something that is...
A
I guess, from... if I was a new user and I came to CAPA and I was like, well, I just want machines in this region, I want them across the failure domains, but I don't necessarily want to create multiple manifests, multiple machine deployments, because I just find that too much — too many resource kinds. I just want it to spread automatically. Yeah.
B
Yeah, so how would we... so we wouldn't care — oh, we don't have to care, actually, right, because we just see if there is, like... what's maybe the... So say I had two failure domains and I have two machines; then, if it's requested — something like "spread them out" or whatever — then I always keep them at a neat...
B
You know, middle ground. It's like one machine in one failure domain, one machine in the other. A failure domain goes down — okay, I see I have only one domain left, so I put two of them into that domain. But as soon as I see that there is an available AZ — which we already reconcile — then on each reconcile I say: okay, how many AZs do I have available? Oh...
B
Okay, so I'm not saying that it's easy, but, sort of, there are a lot of things that we could maybe simplify, so to speak. I wouldn't do this out of the box, though — I would only do this if it's requested, something like an "automatic spread across failure domains" option or something like that, and make it an experimental feature or whatever.
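[To make the shape of that suggestion concrete, a purely hypothetical sketch of an opt-in, experimental option on a MachineDeployment. No such field exists in Cluster API or CAPA today; the field name is invented here only to illustrate "off by default, explicitly requested".]

```yaml
# Purely hypothetical: this flag does NOT exist in Cluster API or CAPA.
# It only illustrates the opt-in, experimental behaviour discussed above.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md
spec:
  clusterName: my-cluster
  replicas: 6
  template:
    spec:
      clusterName: my-cluster
      # failureDomain deliberately left unset; the invented flag below would ask
      # the controller to spread replicas across the cluster's reported AZs.
      # spreadAcrossFailureDomains: true   # hypothetical, experimental, opt-in
      # bootstrap and infrastructureRef omitted; same shape as the sketch above.
```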
C
If it's simply a matter of, you know, "I want fewer resources to manage or create", then — given the complexity here, and given, I think, at least having to revisit a decision already made in Cluster API core — I just think that the bar for this needs to be pretty...
C
You know, pretty high. I'm not saying this because we shouldn't, or we should just... but if there is something that is just not possible, that would be a very important piece of information to have.
C
Yeah, true. However — why do we need the machines? We need the machines to run our workloads. If we have scaling, right — if we scale machines based on workload needs — then the workloads that were being scheduled, the pods that were being scheduled to the machines that are now unavailable, will be pending, which will presumably trigger autoscaling, or, you know, we'll need to scale up the existing machine deployments. So, you know, I think I...
B
And it's a good... yeah, it's a good idea. Well, I don't... yeah — what happens to your network or your workload when your failure domain goes down? Does the autoscaling group understand that it now needs to schedule it into a different region — yeah, not region, sorry. Okay, I don't know.
D
Yeah, I mean, it does. We had clusters in us-east-2 that were affected last year when one of the AZs went out, and we had CAPI clusters that were spanning the three AZs, and we saw basically what was described here, with the pods all going pending, and the other AZs scaled up pretty quickly. It took a few minutes, but the infrastructure seemed to do what it was supposed to do.
D
And I don't think that we would use this if the feature existed; we do the machine deployments, you know, spread out, a different one in each AZ. This might simplify configuration for some of our fallback machine deployments with less optimal instance types, because we have to duplicate those for each AZ, but I don't know that this feature work is worth just a little bit less configuration.
B
Maybe it's just a case of having, like, proper documentation on how to do multiple-failure-domain failovers with multiple machine deployments, and then examples, and then stating what we just said here: that, okay, if your failure domain now goes down, then the autoscaling group would do its thing and rebalance the workflow — the workload, sorry. That's a good point, Cameron, thanks.
C
Thank you, Cameron. If you have some time and you can add a note about your experience and your thoughts to that issue, I would very much appreciate that, because, yeah, nothing trumps — nothing beats — real user experience.
A
Plus three on that, I guess. I'll pause the recording now. But if anyone wants to take a look at any issues or anything like that informally, then we can do that — we don't have to. We can have an easy one for the new year, to start the new year, you know. But yeah, let's pause the recording first.