From YouTube: 2020-10-05 - Cluster API Provider AWS Office Hours
A
Hello, today is October 5th, 2020. This is the Cluster API Provider AWS office hours meeting. This project is a sub-project of SIG Cluster Lifecycle. We do have meeting etiquette, which is basically: be kind to everybody. We abide by the CNCF code of conduct. Please use the raise hand feature of Zoom if you are interested in speaking, and please add your name to the attendee list. The group topics are completely open.
B
Cool. Yeah, so, testing. We've had a good raft of features come in for v0.6.0, particularly around EKS, but we also had spot instances for machine pools.
B
One thing we're really lacking at the moment: there's a fair amount of unit tests as well, but we are lacking e2e tests. So right now I'm finding it pretty difficult to say whether a PR is fine, because with some of the logic around AWS calls it's basically not possible to find out just from reading the code; you kind of have to run it.
B
So we're planning on doing v0.6.1 this week, but I really want the next release to focus on getting the end-to-end stuff in order, and I really don't want to pack more features in without those e2e tests. So I almost want a stability release. That ties into the next thing, which is thinking about v0.7 and then opening up the roadmap, but I just want to see whether other people are in agreement about slowing things down for v0.6.2 and focusing on testing.
C
Yeah, I completely agree, definitely from the EKS side. That was the next issue that I'd started to work on: the end-to-end tests for the EKS side. I probably do need to speak to someone, because things like the EKS control plane take anywhere from 10 to 20 minutes to create, so I just want to discuss whether we mock that out or whether we're going to trade off some timeouts and things like that. But yeah, that's definitely needed.
D
Yeah, I totally agree we need to get those in there, and we have been working on the machine pools stuff, so I may take a stab at that as my time allows this week. We did have two small features that we wanted.
D
We didn't want to change the code out from under all the reviewers, but it's just adding tagging, because as the code stands it doesn't put almost any of the normal Cluster API tags on.
A
Anything else? All right, so should we move on to the next one here, or is there more on this that you want to talk about?
B
No, I think that's fine, yeah. So the next thing is the roadmap. I just hope everyone can access that spreadsheet; I did notice it was still stuck on "only me," but it should be accessible now, and I will add editing for SIG Cluster Lifecycle in a minute. If you click on the v0.6.0 tab, that's the planning we did for v0.6.0, and yeah, we did all the EKS machine pool stuff.
B
We have some things which were in flight: the multi-tenancy rate limiting I was doing, but I had to hold that to do some other e2e fix-ups. And now we're going to work together with the AWS cloud provider and AWS to unify a lot of this performance and rate-limiting code from various projects, including AWS's own, in one place in the AWS cloud provider, where it will be consumed as a library. That is in flight.
B
You know, there are a lot of requests around using things like Direct Connect, VPN gateways, things like that. We don't want to stuff them all into the AWSCluster object, because that leads to sort of conditional hell, so I think I was proposing to break this out into separate CRDs.
B
We've got an open question on whether or not we should use the AWS Controllers for Kubernetes (ACK) project. I've got some concerns around relying on that project, and there's also sort of competition, I guess, from Crossplane.
B
So do we really want a hard dependency on that project? And then, independently, I think we should just start designing that API. So, are people interested in a planning meeting around that roadmap? Should we do it in the next two weeks, after we've done some work on adding e2e and given ourselves some time, or do we need a separate meeting?
A
So from my experience with these meetings in the past, some of them are extremely short and some of them take a little bit longer, but I'd say on average they're kind of on the short side. So I think we could use one of the regular biweekly meetings to discuss the networking evolution and what the API there might look like.
A
I think this is a time that we already have blocked off, so we don't have to come up with a separate Doodle poll to schedule another meeting. So I'm in favor of using either this time today or the slot in two weeks to talk about that.
B
We can do a bit now. Has anyone got any big-ticket items, or do they see any of these priorities that need to change, something that needs to move up higher, or does the general ordering that we have right now look good?
A
My initial thought on whether it's ACK or Crossplane or something else is that that's a significant enough change that it should definitely come in a new minor version and not in the 0.6 patch stream.
E
Yeah, I think one thing that's a little bit confusing about figuring out whether or not to rely on ACK or Crossplane is just understanding how much work it actually is. For example, given that ACK doesn't yet support EC2, would we effectively be buying into implementing EC2 for ACK as part of this milestone, et cetera? It gets really hard to plan based on that.
E
I don't know; I'm a little skeptical of being super reliant on ACK, because I don't think any of the APIs that we use are implemented in it yet, and I don't know if there's a short-list roadmap that's been published yet for what's coming next.
A
Yeah, I think, given that we can't use ACK yet, and maybe we'll want to use it in the future, maybe not, we don't know, I think we can talk about the API that we need in CAPA first, independent of the backend that we use to implement it. So I would say ACK is probably a hundred percent off the table in the short term, and then the decisions can kind of be split.
B
Oh, that's all really useful, actually. So yeah, the main thing I wanted to figure out is: is network topology completely dependent on ACK, or can we proceed with trying to figure it out regardless? And if the answer is to make it at least API-independent, then that's great, and we can start to figure that out.
A
Yeah, I think that's the right way to go, because presumably there's what AWS at a basic level supports and what it doesn't support, and it's not like we're going to be able to create an API that won't align with what you can do in the AWS API.
A
All right, so hearing nothing, I'm going to guess that we're probably not prepped today to talk about the networking APIs for CAPA.
G
It's basically a pull mechanism where you constantly have to look for new messages. So basically we'll be looking for messages every 20 seconds for each of the clusters that we have, and that seemed like a fairly large processing overhead, and it's also an extra call to AWS every time.
G
We look for new messages; the other alternative is SNS, which is a proper push notification system that can send out something to an HTTP endpoint, but that would require us to have a server set up that can consume those HTTP requests and act accordingly. So before I get too far ahead, I wanted to bring that up here.
B
Yeah, the typical pattern that I see is actually that you set up an SNS topic, and then an SQS queue subscribes to that topic, and your client code is always pulling off the SQS queue with long polling. You very rarely see a model where software or servers are directly consuming from SNS; it's used more to wire up between various other AWS services.
B
Yeah, so the long poll is 20 seconds, and if there is something that appears on that queue you get a response. Then, at the end of the 20 seconds, it's closed and you just keep doing that, retrying.
A
And do we need one queue per cluster, or can you just have one queue for everything you're managing?
G
I basically started with one queue per cluster, along with an EventBridge rule per cluster, just because, as a cluster spins up and spins down, we create and delete those. I didn't want to leave any resources behind when the CAPA manager stops, for example.
A
I know we have some scale testing that we've been doing internally for AWS and CAPA.
A
We could maybe see about trying to do some scale testing with this, but it would probably have to get incorporated into planning from our product perspective. So it's probably better to see if we can do something with the resources that we have in the short term, just to get a sense for what it looks like as you start to scale up.
B
I do actually have a scale test e2e PR that's sort of half working. I was waiting for Cluster API v0.3.10, because I wanted to clean up the e2e code before I put that in. But I do have something that will just fire off, say, X number of clusters; you could say 500 clusters, and it will just go and do them all at once and wait for them to actually succeed.
A
So my feeling is that SQS probably makes sense to pursue, and if we can do some scale testing, that will be useful in informing it. I definitely think that's going to be better than solely SNS, given that in many or most cases there's not going to be connectivity from AWS to whatever service or endpoint SNS would need to talk to for CAPA.
G
Right now, with my implementation, I'm only looking at the EC2 instances. I think spot instances come through a different event type, so maybe that can be captured in a separate issue.
B
Yeah, so this is for the use case where an EC2 instance got terminated: either a user deleted it in the console or something happened and for some reason the instance is terminated. For spot instance support, I think what we were really looking at, once we've figured out add-on support for Cluster API, is to then use the aws-node-termination-handler.
B
Having that as an add-on will catch it. Instances get told themselves: you've got two minutes, go do whatever you need to do to clean up. That's sent as something in the instance metadata, and at that point they can start requesting node drains and things like that, and we can go and clean up. So it's kind of separate from the CloudWatch/EventBridge side, which is more like "after termination, remove the machine."
B
Whereas for spot instances we want: the machine is going to be terminated, do the drain, and then handle the deletion.
E
Andrew, did you still have something? I was more or less going to say the same thing. The node termination stuff, I think it might be worth it; there's an issue open for it already in CAPA, but it would be pretty easy to hook into. Even if we didn't finish the add-on stuff first, we could add something special for importing the label key that the node termination handler uses and applies to nodes.
H
So, if I got it right, it works basically by polling, so we are forced to check if the messages are there, and this happens at a given frequency that we have to choose. So even though events are being pushed to the queue, we still have to poll for them on a cycle that we have to manage, and we are interested in basically only one type of event, which is "machine terminated."
B
Yes, so it's slightly different; it's not that we're polling every 20 seconds. You poll SQS, and SQS doesn't return a response for up to 20 seconds. So if there are no messages on the queue, you get a blank response at the end of the 20 seconds and you immediately re-poll. If an event appears within those 20 seconds, it returns the result.
B
So it is basically how AWS expects you to get event notifications using an HTTP API, but it doesn't say anything about how soon you get those events; it just tells you the maximum time that you can poll, and you can get the response at any time within it.
A
All right, I don't see anybody, so let me just real quick see if we have anything we need. Wow, there are 15. Do you want to go through these and assign milestones together, or, if not, we can do it asynchronously. I'll leave it up to you, if you all want to listen to me read these.