A
Hey folks, this is the Kubernetes Cluster Lifecycle, Cluster API Provider for AWS office hours. Today is the 20th of March 2023. Please add yourself as an attendee in the agenda doc; I will also share the agenda doc in the chat.
A
Oh, I think Richard has already shared it, thanks. Please be friendly to each other and abide by the CNCF code of conduct, and use the hand-raise feature if you want to talk and give any input. Apart from that, if anyone new has joined today and wants to say hi, please go ahead, unmute yourself, and introduce yourself.
B
Okay, yeah, hello, my name is Mike. Currently I work at New Relic; before that I was at Red Hat. I'm a maintainer of the Cluster API Operator in Kubernetes. Nice to be here. I have a topic to discuss today, so yeah, that's me.
C
Yeah, sure, I can jump in. I'm Dan, I work for Adobe; I actually work with Mike Tougeron. I'm focused more on cluster scalability problems that we're seeing, particularly in thinking about adopting Karpenter, and I'm interested in how Karpenter and CAPA are going to play together.
A
Okay, if not, then we'll start with the PSA: Luther is now a reviewer. I think this was added by Richard, if I'm not wrong, so yay, we have a new reviewer. I don't see Luther on the call, but I would say: congratulations!
A
Moving on to the action items. Richard, do you want to go ahead with the action item that you have?
D
Yeah, I literally just looked now, and no is the answer. So I will create a PR upstream in CAPI, like we do for our logging level: you can set it via an environment variable when you do clusterctl init. So I'll do the same.
D
Yeah, so I think this probably came out of a discussion on one of Mike's PRs, if I remember rightly. Really this is around: should we treat the reasons for our events as part of our API contract, so that people who build integrations based on the events that we raise have some stability and guarantees around the event reasons?
D
So, you know, alerting, maybe integration with other things as part of a bigger provisioning system, and things like that. Really it was just to get people's ideas, and there's also an open discussion going on about this upstream in CAPI.
D
It's mixed views. I quite like the idea of having stable event names, or event reasons, personally, but that's just me. This is an example; I haven't completely gone through the source code, I just took one controller, ripped out the event names, and put them somewhere. So, I see Mike has his hand up, so I will stop talking.
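For illustration, here is a minimal sketch of what pulling event reasons out into a shared set of constants might look like. The package name, constant names, and the experimental-prefix convention are assumptions for this example, not the actual code Richard extracted.

```go
// Hypothetical sketch only: event reasons gathered in one documented place so
// integrations can rely on them, instead of inline string literals in each
// controller. All names here are illustrative.
package events

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

const (
	// Stable reasons that would form part of the API contract.
	ReasonInstanceCreated = "InstanceCreated"
	ReasonInstanceDeleted = "InstanceDeleted"

	// One possible convention for reasons that carry no stability
	// guarantee yet, relevant to the experimental question raised later.
	ReasonExperimentalFleetRequested = "ExperimentalFleetRequested"
)

// RecordInstanceCreated shows a controller emitting an event via a shared
// constant rather than a literal string.
func RecordInstanceCreated(recorder record.EventRecorder, obj runtime.Object) {
	recorder.Event(obj, corev1.EventTypeNormal, ReasonInstanceCreated, "Created EC2 instance")
}
```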
E
No, I was just going to bring over my two cents from the previous conversation: I like the idea as well. I definitely want to bring what Richard's doing here into a follow-up PR for what I did inside of CAPI.
F
Thanks, yeah, great idea. I guess these are strings, string identifiers, at the end of the day. I haven't seen the discussion upstream, but we probably would like to leave some room for experimental reasons, right? I don't know how we would namespace that, some prefix or some other convention, but that's the only thing. Apart from that, I think this would be great for usability overall.
D
There's a question on that for Daniel, really around the experimental part: would it be sufficient if those event names were in the exp folder, and so were deemed experimental, do you think?
F
Potentially. I think it depends on whether we expect consumers of the events, or the reasons, to understand the layout of the repo, because sometimes, if you're consuming it without having a code dependency, you just have some stable set of identifiers. But yeah, I think that would be a good place for us to store them.
A
Okay, we would then move on to the next agenda item. Cameron, do you want to go ahead?
G
Yeah, sure. Basically, if you want to open that issue and look at it: I brought up some user stories around machine deployments, trying to satisfy specific use cases by supporting multiple instance types in a machine deployment. This is particularly hairy around being able to satisfy a machine deployment with spot instances and on-demand instances, which isn't really possible right now. There has been some really good feedback and questions about this, especially around deletion and whatnot.
G
So I guess I'm just trying to ask for direction here. Do we think that these stories would be better answered by ASGs?
G
Do we think that these stories would be better answered by possibly exploring the EC2 Fleet API, instead of trying to bake more logic into the AWS machine controller to make it smarter so that it can handle this? I'm just looking for a direction on what people think. My concern with pushing people towards ASGs is that ASG support is very incomplete right now and needs a lot of help in order to really use it in an actual production scenario.
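For context on what the EC2 Fleet direction could look like, here is a minimal, hypothetical sketch of a single CreateFleet request mixing several instance types and a spot/on-demand split, using the AWS SDK for Go. The launch template name, instance types, and capacity numbers are placeholders, and this is not how CAPA works today.

```go
// Hypothetical sketch: one EC2 Fleet request covering multiple instance types
// and a spot/on-demand split, the use case discussed for machine deployments.
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createMixedFleet() (*ec2.CreateFleetOutput, error) {
	svc := ec2.New(session.Must(session.NewSession()))

	return svc.CreateFleet(&ec2.CreateFleetInput{
		Type: aws.String(ec2.FleetTypeInstant),
		LaunchTemplateConfigs: []*ec2.FleetLaunchTemplateConfigRequest{{
			LaunchTemplateSpecification: &ec2.FleetLaunchTemplateSpecificationRequest{
				LaunchTemplateName: aws.String("example-worker-template"), // placeholder
				Version:            aws.String("$Latest"),
			},
			// Several acceptable instance types; EC2 picks among them.
			Overrides: []*ec2.FleetLaunchTemplateOverridesRequest{
				{InstanceType: aws.String("m5.large")},
				{InstanceType: aws.String("m5a.large")},
				{InstanceType: aws.String("m6i.large")},
			},
		}},
		TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
			TotalTargetCapacity:       aws.Int64(10),
			OnDemandTargetCapacity:    aws.Int64(4),
			SpotTargetCapacity:        aws.Int64(6),
			DefaultTargetCapacityType: aws.String(ec2.DefaultTargetCapacityTypeSpot),
		},
	})
}
```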
H
Hey, so just a question about ASGs.
E
Just to throw out a kind of tangentially related topic from the CAPI side: at the CAPI meeting the other week, that was one of the reasons why Karpenter support would be really hard, the lack of multi-instance-type support inside of machine deployments.
G
Yeah, and I implemented a possible path forward, focusing specifically on capacity, and from what I found in the implementation, there isn't anything in the MachineSet or MachineDeployment controller that cares about mixing the machine types in the underlying machines. It's just up to the infrastructure implementation, the CAPA implementation, to actually support it.
F
Yeah, this is a great topic. I don't have fully formed thoughts on it yet, but I've been reading the issue, and I think one thing, maybe.
F
Something that I'm cautious of is changing the existing abstraction of a machine deployment to support multiple instance types, because it reminds me of when we initially added support for machine pools. We decided then, and now I think we're actually trying to undo this, that machine pools are not going to have individual machines; it's just going to be an abstraction over some backend.
F
You know, an infrastructure service that deals with a whole group of machines. Changing the machine deployment to support multiple instance types reminds me of that, because, as a user, being able to identify a specific machine, or what instance type it is, becomes more difficult. Before, you could just go and sort of understand a machine deployment and get a list of the machines.
F
What I'm personally also trying to think about, and will respond on the issue with, is: are there ways to meet these use cases by adding to or composing with the existing controllers and abstractions? I'm also wondering if that's an approach for supporting Karpenter. So not just machines: you have machine sets and machine deployments; maybe there's something that composes multiple machine deployments, each having their own instance type.
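To make the composition idea concrete, here is a purely hypothetical sketch of what a resource that fans out to several MachineDeployments, each with one instance type, might look like. Neither the type nor the fields exist in CAPI or CAPA; they are invented here only to illustrate the shape of the idea.

```go
// Hypothetical sketch only: a composing resource above MachineDeployments.
// Nothing here is an existing CAPI or CAPA API.
package v1alpha1

// MachineDeploymentGroupSpec fans a desired replica count out across several
// member MachineDeployments, each pinned to a single instance type.
type MachineDeploymentGroupSpec struct {
	// Total replicas to spread across the members.
	Replicas int32 `json:"replicas"`

	// Members, each of which maps to one MachineDeployment.
	Members []MachineDeploymentGroupMember `json:"members"`
}

// MachineDeploymentGroupMember describes one underlying MachineDeployment.
type MachineDeploymentGroupMember struct {
	// Single instance type used by this member's machines.
	InstanceType string `json:"instanceType"`

	// Optional weight controlling how replicas are distributed.
	Weight int32 `json:"weight,omitempty"`
}
```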
G
Yeah, and it's possible that that composition could be using fleets; it's possible the composition could be completely separate from that as well.
G
Both of those directions, I think, are viable. I'm just sort of looking for the best direction, because this is basically the problem that Indeed is facing: we're often provisioning machine deployments and just run into capacity constraints due to the size of the clusters that we're provisioning. So it's very frustrating to have to work around that when we're trying to deploy machine deployments and just trying to get node types that are as close together as we can.
D
Would that provide... yeah, I need to spend some more time reading this, but I quite like the idea of using fleets behind the scenes, because, I guess, it defers a lot of the decision-making to AWS. I think Alberto made that comment, didn't he? But to give a more informed answer I definitely need to read more. I quite like it, though.
A
I think I incline towards EC2 Fleet as well, because I think that was already on our roadmap. So if that is going to help with this scenario anyway, then why not? We should go ahead with this instead of proposing changes to the existing controllers.
A
Sure, thank you. Richard, you can go next.
D
Yeah, so a quick one: we've got a discussion open about whether we need to introduce a release cycle, a bit like CAPI has done. Our releases are ad hoc.
D
We do them when someone says they want a release, and so we can go months and months without any release.
D
Cool, so yeah, it was just about that. Personally, I put it on the discussion there that I think it's probably about time we do have a release cycle, and maybe that is tied to the CAPI release cycle, so at least we're keeping up to date with CAPI. But we'd love to get everyone's opinions on that discussion, and then we can work out how we actually implement it afterwards.
A
I am plus one as well. It's just that I'm not sure how we start with it; I think we need a plan on how to start.
D
Yes, sorry. If we really want to force the matter as well, we could have a GitHub Action that, when it sees a new CAPI release, creates issues to do our releases, or something like that, if we want to be in line with CAPI; whether that's a couple of weeks later, to allow for testing the new CAPI version. But it can give us reminders to do it.
A
Okay, I think the next topic is related to the RC release that has happened from CAPI, so we have an issue where we have to test this with the RC build.
D
I'd say we just test it until we do our next release, maybe, and then we can think about merging it, but yeah.
A
Okay, sounds good. The next topic is just an announcement, I would say: we are actually working on automating CAPA AMI generation. There was a hard dependency on getting an AWS account dedicated to hosting these images, because right now these images are hosted in an account owned by VMware, so the AMI generation has to be solely the responsibility of someone from VMware.
A
So we wanted to eliminate this dependency and make it upstream. What we are trying to do is create, or request, a new AWS account from the testing folks, so that we can dedicate that AWS account only to hosting our AMI images. So yeah, this is just an FYI; we have discussions going on around this.
A
Okay, I think the next topic is from Mikhail.
B
Yeah, I just wanted to discuss it during this meeting. I proposed a fix for CAPA about how we set the desired capacity for ASGs; the fix is really relatively small.
B
Currently we check whether the machine pool replicas field is not nil, and if so we set the value as the desired capacity. But the thing is...
B
Replicas in MachinePool is never nil; by default it is set to one, and, for example, in several other places in CAPA we rely on this fact: we don't check whether it's nil or not, we just get the value. The issue happens when it's defaulted to one and we have a min size bigger than this default.
B
If the replicas value is lower or higher than those bounds, just do not set it; allow AWS to define the default value there.
B
Yeah, so I proposed this PR. There are two commits: one is just cleanup, renaming the scope to machine pool scope, because it's a little bit confusing; and in the second one we just check whether the value in the MachinePool replicas is between the min size and the max size, and if so we set it as the desired capacity; otherwise we just omit it. So please review, and if you have any questions or additions, we can discuss it.
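A minimal sketch of the check Mikhail describes, with assumed names rather than the actual PR code: the replica count is only passed through as the ASG's desired capacity when it sits within the min/max bounds, and omitted otherwise so AWS chooses the default.

```go
// Hypothetical sketch of the proposed behavior; function and field names are
// illustrative, not the PR's actual code.
package main

// desiredCapacity returns the value to set as the ASG desired capacity, or
// nil to omit it and let AWS apply its own default.
func desiredCapacity(replicas *int32, minSize, maxSize int32) *int32 {
	if replicas == nil {
		return nil
	}
	// A defaulted replica count (e.g. 1) can contradict the bounds
	// (e.g. minSize 3); in that case do not set desired capacity at all.
	if *replicas < minSize || *replicas > maxSize {
		return nil
	}
	return replicas
}
```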
D
Yeah, this is a weird one, isn't it? I guess: what do other providers do in this situation with their machine pools? It feels like you definitely want a machine pool where the desired capacity is not set explicitly and not defaulted to one, as opposed to always defaulting to one. It's just problematic, then, isn't it?
B
Yeah, but that's how CAPI works. Frankly speaking, I haven't checked other providers; I'm going to look at Azure later today. Currently we use just CAPA in our system, but we're going to also adopt Azure and GCP, so I will take a look and let you know what we have there.
D
Yeah, I'm also wondering if we need to propose changes upstream to machine pools, at least while it's still experimental and the API can change; defaults can change and stuff like that. Because once it graduates from experimental, it's going to be a lot harder for us to change something upstream as well. It feels like there's this big disconnect between an infrastructure implementation and the CAPI API types.
I
Maybe I don't understand that part correctly, but CAPI mostly delegates to the providers, right? So why would it even care to set a default machine count, especially if the provider wants to do something else? That's definitely going to cause some disconnects and problems with the providers.
E
Yeah, hi, I'm Mike; some of you know me, I'm with Adobe. We're really interested in pursuing Karpenter, which of course would need support in CAPI and CAPA. Some of you were in the CAPI meeting a week or two back when I brought it up there. I just wanted to...
E
Since Karpenter is AWS, I wanted to make sure I brought it up to this group as well, instead of just to the generic, not generic, but upstream CAPI side of things, to get any initial thoughts this crew has about Karpenter support in CAPI and CAPA.
E
It's something we want to work on with the community and with AWS, and all the people that would be interested, to figure out what would be necessary to make this happen, then figure out whether it's feasible and how to scope out the work. We're assuming it would be a significant effort, and we're assuming there would be work needed...
E
...in CAPI, CAPA, and Karpenter; it would be a very multi-team effort. So we just kind of want to start some of these conversations, let people know that we're thinking about it, and start talking about it.
I
After Mike; I was waiting for him to finish, and then, oh...
E
Yeah, the TL;DR coming out of the CAPI meeting was...
E
In it, people were interested both in Karpenter and in potential support. Machine deployments were kind of not really designed for the idea of multi-instance types, and the general thought was that it would be a significant effort, but people were interested in the concept. That's kind of the TL;DR that came out of it. There weren't a lot of specifics, other than talking a little bit here and there about what could or couldn't work, but nothing specific.
I
Yeah, so, yes, we brought this up in CAPA before. I brought this up in CAPA actually about half a year ago, or something like that.
I
No, not half a year, or maybe it is; I have to check when the issue was created. I did, however, bring this up a while ago, and we were thinking about doing it, and then we shifted the whole thing over to the CAPI side of things, because we decided that is the right location for it. Yeah, no, it was in November.
I
Okay, so I brought this up in November, and we talked about it there with Alberto Garcia. I also brought it up with the Karpenter team, via the link, as you suggest. We actually have a kind of small connection with the AWS folks, and I talked with them on an internal call as well.
I
Basically, they didn't have the help or the time to do it, and they didn't de-prioritize it, but they sort of bumped it, or pushed it into the background. You can see in the linked issue that there are a couple of things needed from Karpenter's side; as far as I understood, it's not even particularly difficult to do, but I think they would really love it if someone could spearhead this whole thing, even an outside person, and help with writing the code and whatnot.
I
Yeah, that's all I have. At least until that support comes from Karpenter's side, I'm not sure what we can do in either CAPI or CAPA, for that matter, to try and use it.
E
No, I just want to say: great, thank you for the context. We'll be reaching out to them as well to follow up on those discussions, and talking with our AWS TAMs as well.
E
Like I said, I just wanted to start some conversations, get things going, make sure it wasn't coming out of the blue for people, and work with the community on all of it.
I
Yeah, I'm super stoked about it. So if you have anything, then myself, and Richard is also interested in it as far as I know, you can push anything our way and we'll gladly take a look.
A
All right, okay. I think the next item is yours, Gergely.
I
Okay, so this is a weird one. I still need some background on this, but I just wanted to bring it to your attention. I talked with some folks at, sorry, some other company who've been using CAPA, and I'm going to... oh, go on, Richard, you know what this is.
I
Okay, I bet you know what this is, so okay, cool. Oh, sorry. So apparently there's some kind of problem with a timeout. I don't know specifically which kubeconfig, but I assume it's the user kubeconfig, which times out after 10 minutes or so, I think.
I
But supposedly there's a problem with the autoscaler: it tries to do its thing, but it can't because the kubeconfig times out, and it restarts or something like that. And that is kind of all I have on this, Richard.
D
Yeah, we've seen a lot of people with this. There is actually, that's why I was joking around, an open issue in cluster autoscaler, since the cluster autoscaler caches the kubeconfig.
I
Do you happen to have that issue? Oh, you'll open it.
D
There is another issue, especially in a management cluster where it's reconciling multiple child clusters and you haven't got the concurrency set high enough, so there's not enough concurrency in the system. It's reconciling one cluster, and because we have waits within the code instead of returning and requeueing, it's waiting while a cluster is being created; meanwhile the token for another cluster has run out, and because this one takes, you know, 15-20 minutes, and there's no other concurrent...
D
...no other reconcile is going, it times out. So there are definitely two problems we see with this; it could be either one, or both.
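As a side note on the concurrency Richard mentions: in controller-runtime this is the per-controller MaxConcurrentReconciles option. Below is an illustrative sketch only; the reconciled type, import path, and the value 10 are placeholders and may differ from CAPA's actual setup.

```go
// Illustrative sketch: raising reconcile concurrency so one long-running
// reconcile cannot starve the others. Values and types are placeholders.
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"

	infrav1 "sigs.k8s.io/cluster-api-provider-aws/v2/api/v1beta2" // path varies by CAPA version
)

type AWSClusterReconciler struct{}

// Reconcile is elided; only the setup with a higher concurrency is shown.
func (r *AWSClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

func (r *AWSClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.AWSCluster{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: 10}).
		Complete(r)
}
```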
I
I
Okay
is:
is
there
any
kind
of
workaround
for
this
by
any
chance?
No
yeah,
yeah
I
thought
so.
G
I was just going to say I can confirm this is the behavior we've seen before as well. We run the cluster autoscaler in the workload clusters for exactly this problem.
G
If you're using an EKS cluster, and you're using a management cluster as well that's separate from the EKS cluster, you experience this problem.
A
Okay, so I think this concludes our agenda. Does anybody have anything else to add to the agenda?