A
So welcome to Cluster API Provider AWS office hours on Monday, the 22nd of March 2021. A reminder that we abide by the code of conduct, so that basically means: be nice to each other.
A
Cool, so there's a few new people actually that I've seen just join in today. So if you want to introduce yourself, please feel free to unmute yourself and say hi. I'll give you a few minutes to do that.
B
Hey, I guess... I don't know, in my Zoom I'm at the top left corner, so I'll just go first. Daniel Lipovetsky. I've worked on Cluster API, and yeah, I've been using the AWS provider. It's great! Thank you for all your hard work; it's a pleasure to use!
B
I'm trying to, you know, contribute in small places where I can. So yeah, I've got something I just want to discuss, so I'm here today.
C
Yes, I am a new face, I will also say hello. My name is Ashish. I worked at VMware, and I was part of the Cluster API AWS provider in the pre-v1alpha1 time frame. After a brief break, I am back and looking to contribute more.
A
True,
okay,
so
sure
you
move
on
to
the
the
psas.
So
the
first
item
is
cdf
is
now
a
maintainer,
so
the
the
change
has
the
pr
has
gone
through.
So
congratulations
to
death,
that's
great!
A
So
the
next
psa
is
actually
something
I've
worked
on
and
there's
a
behavioral
change
in
the
the
subnet
assignments
for
both
flavors
of
the
machine
pools.
So
this
really
has
some.
Essentially,
we
were
ignoring
some
of
the
failure
domains
on
the
machine
pool,
but
then
we've
also
tightened
up
on
some
of
the
the
logic
and
the
precedence
of
how
a
machine
poor
is
assigned
to
a
subnet.
So,
but
that
will
be
in
the
release.
Notes
of
the
next
release.
A
So
moving
on
to
the
action
items,
so
well
I'm
the
first
one.
So
I
I
haven't
actually
looked
at
the
eks
end-to-end
test
since
the
last
meeting.
I've
I've
raised
more
issues
to
add
more
tests,
but
I
actually
haven't
fixed
any
of
the
flakes
to
the
existing
tests
and
then
the
next
one
for
is
also
me,
which
is
we
took
an
action
item
to
to
do
some
analysis
between
the
the
items
on
all
the
specs
of
the
aws
machine
and
the.
A
...what we need for launch templates, because how we handle launch templates currently is not ideal and we're getting more and more issues. This is partially done; I haven't fully completed it, so if anyone's interested in this, I'll put the HackMD link in here and you can have a read as well.
A
So, where were we? Yeah, so the ADR for the usage of v numbers is done; I just need to push that, so I'll do that probably in the morning now. So, Sedef, you had the next action item.
D
Yes,
so
the
previous,
actually
we
we
made
a
fix
for
the
conformance
test.
We
were
using
bazel
build
before,
but
it
didn't
fix
the
problem.
Then
I
think
there
was
a
new
kubernetes
1.21
release.
Now,
conformance
tests
are
passing.
A
Brilliant,
so
moving
on
to
the
discussion
items,
so
I
added
the
first
one
which
is
really
about
when
we
plan
to
do
the
0.65
release,
just
wondering
if
there's
any
thoughts
on
that.
E
Yeah,
let's
make
a
concerted
push
to
get
the
prs
in
for
ignition,
and
I
think
there
was
one
for
egf
animals
tennessee
and
then
I
think
we're
good
to
go.
Oh
maybe
it
might
be
worth
putting
up
the
milestone,
just
check
the
other
things
that
we
had.
E
Should we go through and check them? Yeah, or maybe just the one or two which don't have PRs. So, multi-tenancy: we just need to review the PR; it's ready, so I think we should get that in.
E
Cool. IPs on creation can probably be punted; it's just an optimization, there's a new EC2 API in place, it's not really necessary, so yeah, I think that could be moved to 0.7. The Flatcar Linux PR is in, we just need to review that. I think Kinvolk is keen that we have it supported in v1alpha3, and I agree, so let's get that review done and get it in, I think.
E
Yeah,
if
there's
a
couple
of
easy
ones,
I'll
try
and
get
them
done
this
week,
basically
alongside
all
the
reviews
just
so
that
we
ready
to
go.
Maybe
at
the
end
of
the
week
we
get
ci
running
over
the
weekend
and
then
we
can
do
a
release
on
monday.
So
leave
that
in
if
we
don't
make
it
fine
but
yeah,
it's
an
easy
fix
that.
A
Yeah, what is 0.7.0?
A
You said the same for the IP creation to me.
A
Cool
that's
good
to
get
in
the
fargate
pro
case.
Fargo
profiles:
yeah!
That's
good
to
go
just
made
some
changes
today,
so
we
should
be
able
to
get
that
in
so
the
ignition
bootstrap
yeah
get
that
in
as
well
there's
an
end-to-end
test,
one
which
is
a
follow-up
to
just
basically
add
file
gate
profile
into
contests,
so
I'll
get
that
in
by
the
end
of
the
week
and
then
yeah.
So
this
is
another
one
I've
raised
where
the
manager
machine
pool.
F
So I was talking to some people from AWS recently about spot instances, and every time I speak to people about spot instances, they're always like: it's useless if you can't do mixed instances.
F
An EC2 Fleet allows you to request a fleet of, you know, some determined capacity, and you can use spot instances in there. But what that can also do is allow you to use multiple launch templates, or override the launch template. So, speaking to them, we were talking through the sort of requirements for individual machines and spot instances, and how, at the moment, we use RunInstances and it gives us back an immediate answer, yes or no: you can or can't have the spot instances.
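For reference, a minimal sketch (not from the meeting) of what the EC2 Fleet request described here could look like with the AWS SDK for Go; the launch template ID, instance types, and capacity values are illustrative assumptions:

    // Sketch only: an "instant" EC2 Fleet request with spot capacity and
    // instance-type overrides. Launch template ID and instance types are illustrative.
    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        svc := ec2.New(session.Must(session.NewSession()))

        out, err := svc.CreateFleet(&ec2.CreateFleetInput{
            // "instant" behaves most like RunInstances: it launches (or fails)
            // synchronously instead of keeping a fleet request open.
            Type: aws.String(ec2.FleetTypeInstant),
            LaunchTemplateConfigs: []*ec2.FleetLaunchTemplateConfigRequest{{
                LaunchTemplateSpecification: &ec2.FleetLaunchTemplateSpecificationRequest{
                    LaunchTemplateId: aws.String("lt-0123456789abcdef0"), // illustrative
                    Version:          aws.String("$Latest"),
                },
                // Fallback instance types if the preferred one has no spot capacity.
                Overrides: []*ec2.FleetLaunchTemplateOverridesRequest{
                    {InstanceType: aws.String("r4.large")},
                    {InstanceType: aws.String("r4.xlarge")},
                },
            }},
            TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
                TotalTargetCapacity:       aws.Int64(1),
                DefaultTargetCapacityType: aws.String(ec2.DefaultTargetCapacityTypeSpot),
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println(out.Instances) // launched instances (or errors) come back immediately
    }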
F
At this point, they were suggesting that we could basically replace RunInstances with this EC2 Fleet for spot instances, and then that would give users the ability to specify, you know, "if I can't have an r4.large, then okay, I'll have an xlarge instead", or something like that. Now, the problem I see with this is that it relies on launch templates, and when you're using individual machines you don't really have any way to lifecycle...
F
...those at the moment in Cluster API, that I'm aware of at least. But I think launch templates are used for the machine pool implementation, maybe, so yeah.
A
Yeah, I guess from my side: in the machine pools we don't handle launch templates very well, in honesty. We create a default one, but, especially in the managed machine pools, we don't expose the launch template at all, and that's actually causing problems for users. Say you want to change the security groups associated with remote access: you just can't do it without that.
A
So
there's
there's
a
few
more
issues
as
well,
so
along
those
lines
that
to
do
with
the
you
know
us
basically
not
having
launched
templates.
Well,
when
I
said
dane,
you've
got
your
hand
raised
as
well.
G
Oh
yes,
sorry
I
was,
I
had
a
hard
time
finding
my
mute
button
there,
the
it's
it's.
This
is
a
timely
conversation,
because
we
were
working
on
some
improvements
on
the
machine
pool
side
for
spot
and
mixed
instance,
types
auto
scaling
groups
got
the
they
got
this
new
feature.
I
believe
it
was
in
december
called
a
capacity
rebalance,
which
is
really
handy
for
this
kind
of
thing,
because
it
allows
the
auto
scaling
group
to
preemptively
create
new
instances
of
either
a
different
type
or
in
a
different
availability
zone.
G
When
amazon
detects
that
capacity
will
soon
be
constrained
for
the
type
in
the
zone
that
you
are
on,
so
we're
it's
just
one
extra
field
in
the
api,
so
we
we're
testing
that
right
now
locally
and
then
we
were
going
to
pr
up
that
feature.
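As a rough illustration of the field Dane refers to, here is a minimal sketch of an Auto Scaling group request with capacity rebalance enabled and a mixed-instances spot policy, using the AWS SDK for Go; all names and values are illustrative and not taken from the meeting or the pending PR:

    // Sketch only: an Auto Scaling group with CapacityRebalance enabled and a
    // mixed-instances spot policy. All names and values are illustrative.
    package main

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/autoscaling"
    )

    func main() {
        svc := autoscaling.New(session.Must(session.NewSession()))

        _, err := svc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
            AutoScalingGroupName: aws.String("example-machinepool-asg"), // illustrative
            MinSize:              aws.Int64(1),
            MaxSize:              aws.Int64(5),
            VPCZoneIdentifier:    aws.String("subnet-aaaa,subnet-bbbb"), // multi-AZ subnets, illustrative
            // The extra field mentioned above: proactively replace spot instances
            // that AWS signals are at elevated risk of interruption.
            CapacityRebalance: aws.Bool(true),
            MixedInstancesPolicy: &autoscaling.MixedInstancesPolicy{
                LaunchTemplate: &autoscaling.LaunchTemplate{
                    LaunchTemplateSpecification: &autoscaling.LaunchTemplateSpecification{
                        LaunchTemplateId: aws.String("lt-0123456789abcdef0"), // illustrative
                        Version:          aws.String("$Latest"),
                    },
                    Overrides: []*autoscaling.LaunchTemplateOverrides{
                        {InstanceType: aws.String("m5.large")},
                        {InstanceType: aws.String("m5a.large")},
                    },
                },
                InstancesDistribution: &autoscaling.InstancesDistribution{
                    OnDemandBaseCapacity:                aws.Int64(0),
                    OnDemandPercentageAboveBaseCapacity: aws.Int64(0), // 100% spot
                    SpotAllocationStrategy:              aws.String("capacity-optimized"),
                },
            },
        })
        if err != nil {
            panic(err)
        }
    }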
G
Trying
to
think
a
lot
of
our
stuff,
though,
is
around
machine
pools
and
not
around.
Obviously,
the
machine
deployments
in
the
run
instances
thing.
So
that's
a
that's
an
interesting
conversation
and
be
interested
to
to
hear
more
on
that
side.
And,
lastly,
the
the
launch
templates.
G
We could break it out, like we've kind of been talking about, and have a separate API construct for the launch template, and then make some kind of a reference, like we've done with other objects. At the moment the machine pool exposes most, but not all, fields of the launch template, and then just handles the lifecycle of those itself if the fields are mutable in Cluster API.
G
So
if
we
modify
it,
then
it
creates
a
new
launch
template
version
for
you
and
acts
more
like
an
update
mechanism,
as
opposed
to
create
new
and
change
reference.
That
has
some
pros
and
cons.
It's
effectively
an
abstraction
over
aws
instead
of
a
representation
of
aws's
construct,
and
I
could
definitely
see
some
benefits
to
having
the
launch
template,
be
its
own
type.
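To make the idea concrete, a hypothetical sketch of what a standalone launch template type could look like; no such type exists in the provider today, and every field name here is an assumption for illustration only:

    // Sketch only: a hypothetical standalone launch template type. It does not
    // exist in the provider; all field names are illustrative.
    package v1alpha4

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // AWSLaunchTemplateSpec would own the subset of an EC2 launch template that
    // we want to manage from Kubernetes; mutating it rolls a new version.
    type AWSLaunchTemplateSpec struct {
        InstanceType     string   `json:"instanceType,omitempty"`
        AMI              string   `json:"ami,omitempty"`
        SSHKeyName       *string  `json:"sshKeyName,omitempty"`
        SecurityGroupIDs []string `json:"securityGroupIDs,omitempty"`
        SpotMaxPrice     *string  `json:"spotMaxPrice,omitempty"`
    }

    // AWSLaunchTemplateStatus records the AWS-side identifiers so that machine
    // pools (or machines) can reference the template by ID and version.
    type AWSLaunchTemplateStatus struct {
        LaunchTemplateID string `json:"launchTemplateID,omitempty"`
        LatestVersion    int64  `json:"latestVersion,omitempty"`
    }

    // AWSLaunchTemplate is the hypothetical API object itself.
    type AWSLaunchTemplate struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   AWSLaunchTemplateSpec   `json:"spec,omitempty"`
        Status AWSLaunchTemplateStatus `json:"status,omitempty"`
    }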
F
So
that's
kind
of
what
I
was
wondering,
whether
there'd
been
any
discussion
about
having
launched
templates
of
their
own
type.
So
one
of
the
things
I
was
I
was
on
a
call
with
some
of
the
pms
at
aws
around
the
different
stuff,
because
I
sort
of
gave
some
feedback
that
it'd
be
much
easier
for
us.
If,
if
the
east
2
fleet,
not
yeah
east
fleet,
api
could
just
accept
like
the
launch
template
kind
of
inline
like
you,
do
chuck
everything
in
for
run
instances
and
we
were
discussing
the
limits
and
stuff.
F
So
we
were
talking
about
how.
How
could
we
leverage
this?
If,
like
every
machine
created
its
own
launch
template
and
how
the
limits
work
for
that,
and
they
were
saying,
there's
something
like
a
10
000
launch
template
limit
or
something
like
that
soon
is
going
to
be
the
case
anyway.
F
So
yeah
there
was
a
lot
of
discussion
about
limits
and
and
whether
we
could
just
create
a
launch
template
for
every
machine,
and
you
know
I'm
not
familiar
enough
with
aws
to
know
whether
we
can
like
create
that
and
then
delete
it
as
soon
as
the
machine's
launched
or
or
things
like
that.
So
there's
yeah.
F
If
there's
interest
for
running
spot
instances
with
individual
machines
through
machine
deployments
with
like
this,
then
I'll
I'll
try
and
do
some
more
research
into
it
and
come
up
with
something
I
guess
for
for
users
of
cluster
api
who
who
can
use
machine
pool.
I
think
it's
a
lot
easier
to
implement
in
machine
pool,
if
I
understood
correctly,
at
least
so.
It
might
just
be
a
case
that
we
want
to
say,
as
a
community
like
if
you
want
mixed
instances,
use
machine
pool
but
yeah.
G
I don't know if it's similar enough that it could just be a flag in the AWSMachineTemplate, so that it could say "create a launch template" versus, you know, do nothing and just be a template for the RunInstances call. Maybe that's a bit too awkward, I'm not sure; I haven't looked at all the fields and compared them, but that might be something to at least look at.
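A minimal sketch of the flag idea Dane describes, purely hypothetical; neither the field values nor the type exist in the AWSMachineTemplate API today:

    // Sketch only: a hypothetical provisioning-mode flag on the machine template;
    // nothing like this exists in the provider today.
    package v1alpha4

    // ProvisioningMode would select how an AWSMachine created from this template
    // is launched.
    type ProvisioningMode string

    const (
        // ProvisioningModeRunInstances launches the instance directly via
        // RunInstances (the current behavior).
        ProvisioningModeRunInstances ProvisioningMode = "RunInstances"
        // ProvisioningModeLaunchTemplate creates (or versions) an EC2 launch
        // template from the same fields and launches through it, for example via
        // EC2 Fleet for mixed or spot instance types.
        ProvisioningModeLaunchTemplate ProvisioningMode = "LaunchTemplate"
    )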
F
That
is
kind
of
one
of
the
ideas
I
was
going
down
as
well.
Whether
you'd
have
like
a
launch
template
cr
that
referenced
the
machine,
template
and
generated
the
launch
template
from
that
and
then
in
the
status
had
the
reference
to
it.
Then
you'd
reference,
the
launch
temp.
It
gets
a
bit
convoluted.
I
think,
but
you'd
reference,
the
launch
template,
which
the
machine
would
then
create
from
that.
It's
yeah
it's
a
bit
weird,
but
I
did
go
down
that
same
sort
of
thought.
H
Yeah, hey, my name is Spike. I'm a colleague of Daniel Lipovetsky and Joe Julian at D2iQ. I'm new to this whole CAPA thing, but I've worked with spot instances in the past, supporting spot instances for our Kubernetes distribution. I just have a question: it seems like we're focusing a lot on the machine lifecycle portion of this, and I'm a little bit curious as to whether we're going to be supporting, like...
H
I
I'm
not
really
seeing
how
we're
going
to
be
supporting
like
the
different,
like
types
in
within
the
fleet
like
right.
The
way
that
we
we
we've
implemented
it
and
day
2
iq
is
kind
of
like
just
one
spot
instance
and
that
doesn't
really
like
work
out.
So
that
speaks
to
the
fact
that
was
in
the
first
bullet
point
like
if
you
can't
get
this
one.
H
Try
the
other
one
and
I'm
curious
to
hear
folks
kind
of
like
opinions
on
how
that
should
be
implemented,
because,
like
a
lot
of
the
time
like
that's,
that's,
usually
what
people
are.
That's
like.
That's
like
the
best
way
of
getting
spot
implemented,
and
maybe
maybe
that's
to
me-
that's
like
the
bigger
problem
than
necessarily
having
them
have
sharing
like
the
machine
machine
life
cycle
or
the
machine
configuration
stuff,
and
I
don't
know
if
that
would
require
like
a
new
api
type
or
something
like
that.
F
So
I
think
yeah
you've
kind
of
hit
the
nail
on
the
head
there
with
the
problem.
But
it's
you
know
like
it's
not
very
useful
if
you've
only
got
that
one
so
like.
I
know
a
lot
of
openshift
customers
are
running
spotio
underneath
so
that
they
can
sort
of
hack
their
way
around
this,
but
that
only
works
for
like
some
offerings
so
like
there's,
some
red
hat
hosted
offerings
where
you
don't
get
access
to
to
configure
that,
for
instance.
F
So that's where this kind of request has come from, because originally it was like: yeah, okay, we'll implement spot support, but we're going to do this just for single machines and it'll be a single type, and if you can't get that, you know, have multiple machine sets and the cluster autoscaler.
F
You
know
you
can
sort
of
work
around
it
in
a
way,
but
it
doesn't.
It
doesn't
work
perfectly
right,
and
so
that's
why.
I
think
this
is
important
and
you
know
if
it's
a
case
of
it's
too
difficult
to
make
this
work
for
individual
machines
and
follow
the
machine
deployment
style
and
we
have
to
push
people
towards
machine
pools
and
that's
something
we
could
do.
But
at
the
moment
openshift
doesn't
have
machine
pool
so
like
for
me.
I'm
still
a
little
bit
focused
on
on
the
single
instance.
F
Yeah, I certainly think that would be useful, but I see Dane's got his hand up, so I don't know if you want to...
G
Yeah, I would definitely be interested in the discussion around standardizing our use of launch templates. I mean, I have my team's opinions on how we like to run spot: we definitely prefer using the machine pools construct, because then we can take advantage of multi-AZ ASGs, along with some constraints around, okay...
G
...you can't use EBS volumes if you're in here, and you can't use inter-AZ pod anti-affinity, and things like that. That has, so far in our early testing (we're very early in that), looked very promising, and it ends up giving an experience very similar to Spot.io, with the automatic capacity rebalance, the avoidance of unnecessary interruptions, and capacity-optimized allocations; things that would be very difficult...
G
I
think
in
machine
deployments,
because
a
lot
of
that
implementation
is
kind
of
you
know
behind
the
black
box
of
the
auto
scaling
group,
at
least
as
far
as
I
can
tell,
but
that
said
I
haven't
dug
into
that
ec2
fleet
thing,
so
maybe
that
does
a
similar
logic
behind
the
covers,
but
yeah.
I
think
it
definitely
warrants
some
more
discussion
to
see
the
the
best
way
so
far.
The
the
machine
pulls
has
worked
really
well
for
us
and
we're
interested
in
in
investing
some
more
time.
F
Thanks
dan,
just
to
round
off
my
thoughts,
if
I
may
yeah,
I
think,
ideally,
we
see
some
sort
of
like
you
know:
standardized
launch
template
sharing
thing
across
kappa.
That
would
be
great
for
me.
F
I
probably
don't
have
much
time
to
work
on
this
for
the
next
couple
of
months,
so
this
is
probably
going
to
be
like
a
sort
of
may
june
kind
of
time
that
we're
looking
into
this
but
yeah
like
anything,
I
can
help
in
the
meantime
I'll
try
and
do
if
there's
anything
that
happens
before
then,
if
not,
then
I'll
try
and
spare
the
effort
a
little
bit
later
in
the
year.
A
So
I'll
take
an
action
light
and
then
to
see
how
we
proceed
with
that
and
then
we
can
either
discuss
it
in
slack
or
or
you
know,
if
we
wanna
someone
wants
to
do
it
sooner,
then
we
can
get
the
proposal
been
sooner
but
yeah
I'll.
Take
that
as
an
actual
item.
A
So the next topic for discussion is from Daniel.
B
Yeah, I just wanted to ask about the SSH key validation fix. It's in master; does it need to be backported to release-0.6? It's something that we'd really like to have; we're using...
B
I
I
mean
yeah,
we're
using
the
latest
release,
which
is
o6,
so
I,
but
I
don't
know,
I
don't
know
the
what
needs
to
be
done
to
get
it
in
there.
A
Yeah, yeah. That was because, basically, we're making some of the API changes ahead of time on a separate branch.
B
Okay,
thank
you
and
yeah.
The
next
thing
is:
there's
yeah,
there's
just
a
small
refactor,
just
kind
of
a
I
want
to
like
it
was
kind
of
a
bike
shedding
question
the
there
there's
validation
that
happens
for
the
ssh
key
name
in
both
database
machine
and
aws
cluster.
There
used
to
be
like
two
groups
of
tests.
I
grouped.
B
...the PR is there, but I was kind of wondering at the time: I could test the sort of common function that gets called, which would be a unit test, but I decided to leave it as an integration test, in the sense that the test exercises the actual validation of an AWSMachine and an AWSCluster, and so other things happen along with the SSH key name validation. So yeah, any opinions on which should happen?
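A rough sketch of the integration-style test Daniel describes, assuming the SSH key name validation is reachable through the AWSMachine webhook's ValidateCreate method; the import path, method name, and test values are assumptions, not taken from the actual PR:

    // Sketch only: an integration-style test that goes through the AWSMachine
    // webhook rather than calling the shared SSH key helper directly.
    package v1alpha3_test

    import (
        "testing"

        "k8s.io/utils/pointer"

        infrav1 "sigs.k8s.io/cluster-api-provider-aws/api/v1alpha3"
    )

    func TestAWSMachineSSHKeyNameValidation(t *testing.T) {
        tests := []struct {
            name       string
            sshKeyName *string
            wantErr    bool
        }{
            {name: "valid key name", sshKeyName: pointer.StringPtr("default"), wantErr: false},
            {name: "key name with invalid characters", sshKeyName: pointer.StringPtr("\\invalid\\"), wantErr: true},
        }

        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                m := &infrav1.AWSMachine{
                    Spec: infrav1.AWSMachineSpec{SSHKeyName: tt.sshKeyName},
                }
                // The whole webhook runs, so anything else it validates is
                // exercised alongside the SSH key name check.
                if err := m.ValidateCreate(); (err != nil) != tt.wantErr {
                    t.Errorf("ValidateCreate() error = %v, wantErr %v", err, tt.wantErr)
                }
            })
        }
    }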
A
It's a good time to see if anyone else has an opinion. Yeah, just personally, I would always test it, like you say, in the context of the cluster; that's just the way that I do it. That's just personal opinion, so I don't know if anyone else has any views on that.
D
You
already
have
an
open
pr
for
it
right,
danielle,
yep,
yep,
okay,
so
yeah.
We
I
can
try
to
like
review
it
on
on
the
pr.
A
Really
and
sadesh
you've
got
the
next
point
of
discussion.
D
Yes,
so
since
we
are
getting
ready
to
open
main
branch
for
v1
alpha
4
development,
I
did
some
cleanup
in
our
e2e
test
divided
the
test
as
a
main
branch
and
v06
branch.
This
is
similar
to
what
core
cluster
api
and
capzi
is
doing,
but
we
may
wanna
wait
until
we
open
the
main
branch
to
v1
alpha
four
changes,
and
currently
there
are
some
typos.
D
Probably
the
tests
are
failing
in
the
in
the
pr,
but
I'm
gonna
fix
that,
but
I
will
put
a
hold
on
it
until
we,
we
march
v1
alpha
4
into
the
main
branch,
just
just
that
has
something.
A
Going once, going twice... Cool, well, we'll call the meeting to a close. Thanks, everyone, for your time, and we'll see you in a couple of weeks. Take care.