From YouTube: 20190729 - Cluster API Provider AWS Office Hours
A: Hello, and welcome to the July 29th edition of the Cluster API Provider AWS office hours, a sub-project of Cluster API and SIG Cluster Lifecycle. We have a relatively short agenda today, so if you have any topics you want to cover, please go ahead and add them to the agenda; I've gone ahead and linked it in the chat for the people who are here live.
A: To start off with today, I want to let everybody know that the v0.3.6 release has been sent out. The major highlights are that the Cluster API components have been updated to version 0.1.8, there's an optional pprof server for debugging, there are additional machine and cluster events, and there are various other bug fixes.
A: Probably the most impactful change for people is that we started using the image promoter from the k8s-infra working group, so there's a new production image location for the images. I've linked that in both the release notes and the agenda for anybody who wants to take a look. The big highlight there is that the release infrastructure is now completely owned by the community; previously, Cluster API required Justin Santa Barbara from Google to push images.
A: That's a much longer-term effort. Right now we're the only guinea-pig users of the image promoter. Once the image promoter is considered GA and all of the existing Kubernetes images have been moved over to use it, the k8s.gcr.io vanity domain will be moved over and we'll be able to access our images underneath it, but I haven't heard any timeline on that.
A: Indeed, that is correct. The AMIs are still being built under a VMware account, and I don't know when we'd be able to migrate that. In the past, when we've discussed it, there were some concerns about hosting those under a CNCF-owned account, specifically around how we handle vulnerability scanning, remediation, and things like that. That's why we warn people against using those images in production and say that people should be building their own images for production use cases. We'll have to figure that out as a community.
A: I can try to sort that out, but that's the last blocker there. The DNS is configured correctly, and I have a PR out to update redirects for the release-0.1 branch, but until we solve that DNS resolution issue on the Netlify side when using the CNAME record, we have to hold off on that PR. Hopefully we'll be able to get that sorted out within the next week or so. Does anybody have any other topics before we move on to backlog grooming?
B: We have some internal use of CAPA, and they've been setting the memory limits and requests fairly small, I guess, like one, two, or three hundred megs, and the pods keep getting killed. I don't know if they just legitimately aren't giving it enough memory or if they're doing something that's causing some sort of leak. That's why I added pprof, so that they could try and diagnose it, but I figured, since y'all are using it, I'd ask if you've run into anything.
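For reference, here is a minimal sketch of the kind of resource bump being discussed, written as a strategic-merge patch; the deployment name, namespace, and container name are placeholders and may not match a given CAPA install:

```yaml
# Sketch only: apply as a kustomize/kubectl strategic-merge patch; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capa-controller-manager   # placeholder name; check your install
  namespace: capa-system          # placeholder namespace
spec:
  template:
    spec:
      containers:
        - name: manager
          resources:
            requests:
              memory: 256Mi       # roughly the sizes mentioned above
            limits:
              memory: 512Mi       # extra headroom while diagnosing whether it's a leak or undersizing
```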
A: I think the biggest thing that's missing right now is actually documenting the use cases, because I couldn't find anywhere we document how to specify which availability zones you want subnets created in, or the fact that you can use the availability zone when defining a machine to point at a particular AZ versus a specified subnet.
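To make the missing documentation concrete, here is a rough sketch of the two shapes being described; the field names (an availabilityZone per subnet entry, and availabilityZone or subnet on the machine's provider spec) are recalled from memory and should be verified against the actual CAPA API types:

```yaml
# Sketch only: field names are assumptions; check the CAPA API types before using.
# 1) Request subnets in specific availability zones via the cluster network spec:
networkSpec:
  subnets:
    - availabilityZone: us-west-2a
      cidrBlock: 10.0.0.0/24
      isPublic: false
    - availabilityZone: us-west-2b
      cidrBlock: 10.0.1.0/24
      isPublic: false
# 2) Pin a machine to a particular AZ instead of naming a subnet directly:
providerSpec:
  value:
    availabilityZone: us-west-2a
    # ...or target an explicit subnet instead:
    # subnet:
    #   id: subnet-0123456789abcdef0
```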
B: I'm going to just stick this in the 0.4 milestone, although it really needs to be done as part of 0.3. I don't know if we want to have a 0.3.x milestone that we can use to put things in that aren't really related to the v1alpha2 work. That might be something to think about, yeah.
A
B
A
It
so
it
seems
like
a
reasonable
feature
request.
The
biggest
issue
is:
is
that
there's
a
separate
policy
that
needs
to
be
applied,
that
we
don't
currently
apply
for
the
CNI
to
work
and
nadir
brought
up
the
idea
of
having
a
separate
policy
that
could
be
manually
attached
to
the
nerd
role
and
we
can
likely
solve
it
through
documentation
for
now
and
then,
once
that's
in
place,
we
would
still
have
to
give
the
user
instructions
on
how
to
deploy,
maybe
provide
a
separate
add-ons
file
to
use
if
they
want
to
deploy
that.
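As an illustration of the "manually attach a separate policy to the node role" idea, here is a CloudFormation-style fragment; the role name and the use of the managed AmazonEKS_CNI_Policy are assumptions for the sake of example, not something the project ships today:

```yaml
# Illustration only: the role name is a placeholder and the managed policy is an assumed choice.
Resources:
  NodesRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: nodes.cluster-api-provider-aws.sigs.k8s.io   # placeholder; use the role CAPA created for nodes
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy        # grants the ENI/IP permissions the VPC CNI needs
```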
A: One of the things I did when I poked Naadir was to ask whether he knew of a way to scope down the actual policy that Amazon recommends for using the VPC CNI provider, because it's overly broad right now, whereas we try to scope things down based on the ownership label to limit what the roles can interact with. That might be something we want to look at as well, but we can easily treat it as a follow-up item.
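For context on the kind of scoping being discussed, this is what a tag-conditioned IAM statement can look like; the actions, tag key, and cluster name are illustrative assumptions rather than the policy CAPA actually generates:

```yaml
# Illustration only: restrict destructive EC2 actions to resources carrying the cluster ownership tag.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - ec2:TerminateInstances
        - ec2:DeleteSecurityGroup
      Resource: "*"
      Condition:
        StringEquals:
          "ec2:ResourceTag/kubernetes.io/cluster/my-cluster": owned   # assumed tag key and placeholder cluster name
```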
B: All right, anything else on this one? Okay, so I'll get the milestone updated. This one Daniel filed: basically, if you have a control plane that starts up successfully and the control-plane-ready annotation gets set on the cluster, and then you go delete all your control plane machines, you can't create any more control plane machines, because that annotation doesn't let you. The way the logic is coded, that specific set of conditions keeps you from being able to get any more control plane machines.
B: Sounds good to me. Let me just refresh here... I think everything has one now, yep, other than the ELB one, and we're going to check back with Andrew to see if we can either close it out or figure out if it was a different set of conditions. Actually, while we're here, I'm just going to put this one on 'next' to indicate that we've looked at it.