From YouTube: SCL Cluster API Provider AWS Office Hours 20190715
A: Hello, and welcome to the July 15th edition of the Cluster API Provider AWS office hours, a sub-project of both Cluster API and SIG Cluster Lifecycle. We have a relatively light agenda today, so if you have any items you want to discuss, please go ahead and add them to the agenda doc; I just linked it in the group chat. To start off with today, I wanted to give a PSA to everybody that version 0.3.4 has been released.
A: The main features there are that the Cluster API components have been upgraded to 0.1.6, and it's also the first Cluster API release that is using the image promoter from the k8s-infra working group. So one of the things that I would like to do going forward for the AWS provider releases is to also switch to using the image promoter, unless there are any strong objections there. Some additional features have been noted in the meeting notes; I don't see a need to necessarily bring them up directly here.
B: So, a few changes: we moved everything to the v1alpha2 types, and there have been a bunch of issues opened; I'm currently working on two of them. One is the AWSMachine, which is a new type, and the controller logic, which is going to be kind of a merge of the actuator and the old cluster-api controller, so it's going to be a mash-up of all of them inside the machine controller. And the other type that we add is the AWSCluster.
B: One thing that I haven't changed is the golangci-lint tooling, and that's actually pretty good if it stays in there, because then it fixes the version for every one of us: when we go and do `make lint`, there's not going to be any version mismatch between the linters; it's always going to be the same. So that's one of the nice things that I like about it, this fixed-version stuff. But other than that, it was much more complicated before than it is now, so yeah.
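One common way to get the pinned-linter behavior described above in a Go repository is the `tools.go` convention, which makes `go.mod` record the tool's version. This is only a sketch of the general idea; it is an assumption, not necessarily how this repository actually pins golangci-lint (it may pin a version in the Makefile instead):

```go
//go:build tools
// +build tools

// Package tools blank-imports build and lint tools so that `go mod tidy`
// keeps their versions pinned in go.mod; every contributor running
// `make lint` then resolves the same golangci-lint version.
package tools

import (
	_ "github.com/golangci/golangci-lint/cmd/golangci-lint"
)
```

The `tools` build tag keeps this file out of normal builds; it exists only so the module graph records the dependency.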
E: Right, over to you, Andy. Thank you, Jason. So we left off, according to the notes from last time, on issue 773, which is the first one on page two, so we're just going to do what's on page one, and I'm going to work from the bottom up. The first one is mine, which I opened about not having to regenerate all the generated code every time we run pretty much every single target in the Makefile. This is not a high-priority item, but I know some changes have been made to the Cluster API Makefile.
A: I'm torn on it. Having previously been involved with doing some of the testing where we've had to build fakes for the AWS SDK, it became very painful to build up those fakes and maintain them over time, and gomock simplified that part of things. But at the same time, gomock is not ideal either. So I don't want to get to the point where we're building all these complicated fakes and maintaining them, but I mean...
A: So that comes with challenges as well, because when we initially did an internal prototype for the AWS provider, that's how we built things out, and it turned out that we were passing a lot of data between that AWS abstraction layer and the actuator. So we simplified things when we created the upstream provider by having the shared scope that is passed down through, so we no longer had to marshal data back and forth through different layers. I also...
C: It's not about mocking things; I think mocks in unit testing in Go are pretty much inevitable. It's more the style of testing with gomock: I feel like it makes you test what is going to be called in the implementation, and how many times, and I don't know if that's exactly what we want to be testing. It ends up making us test and write a lot of things that we don't really need, without giving us too much benefit.
F: What I've seen can also be convenient sometimes is to have a canned EC2 response, and to be able to have a mock that basically returns that EC2 response; at least when I've been building AWS tooling, I've found that to be useful. I don't think that's gomock necessarily, but I'm actually trying to remember: inside the EC2 API directly, I thought there was a way you could basically give it those JSON responses and say, this is what I want you to pretend. Yeah.
B: This would not be wonderful, but we could backport the ObjectMeta field. I don't know if we did, but even if we wanted to, it wouldn't fix this. We could backport it and change it; actually, just that would remove the creation timestamps, so this problem wouldn't happen. But I think that was technically a breaking change for v1alpha1.
B: So I'm trying to see if we actually backported this. But it was because we were embedding ObjectMeta from the metav1 package into a spec, which then gets generated; the creationTimestamp has a custom JSON marshaller, and it would expect that you actually pass in null instead of just empty, and so that's why validation would fail. But we actually added a new ObjectMeta, so it's a streamlined version with only what we actually need.
G: But I think there was a comment in the machine actuator which says we should be careful about something; I'm not able to remember what that exact thing was. In the machine actuator, when it checks whether the machine exists, there's a comment saying "we do this, we pick the first one that matches, but maybe we shouldn't do that," or something to that effect. I'm trying to pull up that code.
E: Thanks. Next, failure events to the cluster object. Jason, I knew that you had been working on a PR to add some additional cluster and machine events, but I don't know that it had gotten to all the various AWS resources that we are managing. Are you still planning on adding that, or do you want somebody else to help out? So I think there's...
E: Okay, so do you want to take this one and roll it into the PR that you're working on for the high-level events for the cluster? Yep.
E: Next, retries for resource tagging. I know that this is definitely something that we need to get into, so why don't we talk about this one. I imagine, Andrea, it probably closely relates to your security group one. So is this something that we need a design document for, for each of the various resources and whatnot?
A: I think it's more complicated than that. Going back and reading some of the documentation around the eventual consistency of the AWS SDK, I think what we're going to end up having to do is: anywhere where we query for or create something, and then try to do something with that, we need to add in some retry-with-backoff logic.
E: Yeah, this is definitely something that I think we are interested in doing for v1alpha2, given that we have said that the providers should assume that it's a one-and-done creation, and that some providers, if they want to, could try to remediate any issues with the infrastructure. But I think it's safer if we just assume that the VM is either stopped or terminated. Well, stopped may be a hard one to deal with, but at least for terminated, we can update the status of the machine.
E: This one might need a little bit of brainstorming around stopped instances. I can never get my cloud providers straight in the Kubernetes codebase. For AWS, I can't remember: is it one of the ones that deletes the node if it finds a stopped instance, or does it allow it to remain? Does anybody know?
E: I know that we've had some issues in the past with the OpenStack provider, where I think if the OpenStack VM was stopped, it would just delete the node. I don't remember if that applied to AWS or not, but certainly for terminated, I think that CAPA should update the infrastructure status with an error message, and then that can get copied over to the machine status.
E: This was fixed; I just added this as an issue, so I need to close this one out. Next: document the infrastructure-ready and control-plane-ready annotations. These were two new annotations that came in with my control plane machine race condition fix, so we need to do some documentation. I will take this unless anybody else wants it. All right, let me...
F: We hard-code the type of load balancer, so even if you bring your own VPC, we can't change it. What I said here is that we could make that one bit configurable, so that it could work in the bring-your-own-VPC case, and that would be something to look at. But currently, today, it's not; it's always internet-facing, I think.
E: Yes, yeah, making sure that we can still do it. Although the cloud-init for the bastion host is a relatively simple script that just downloads another shell script from an S3 bucket that Amazon maintains, and runs it. So I don't know if there's some way that we could try and simplify that so that it didn't need cloud-init. I mean, presumably we could bake it into images, but then we would require everybody else to build their own bastion images.
E: Okay, then we have: controller can generate an ELB name that is too long. So if you give a cluster name that is sufficiently long, it ends up saying it can't have a load balancer name that is more than 32 characters, and we had some back and forth. My last suggestion was: we take a look at the cluster's name, and if it's greater than 32 characters, then we could come up with some formula for using a portion of the cluster's name along with a unique random set of characters, and have that be the load balancer name.
E: The CAPI cluster controller then tries to do an update, but it doesn't retrieve the changes to the cluster, so it has the old resource version and gets this conflict. So we might end up needing to do a fix in Cluster API to try and deal with this, so that it knows: okay, I've called delete on the actuator, or the actuator might have modified the cluster, so we need to go retrieve the update. I don't know if that's maybe too simplistic, if we might drop some things accidentally, but at the very least, I'm fairly certain...
E: Yeah, I mean, I do think that I will test this and see if it happens every single time you try to delete a cluster: does it get this conflict error? I would expect it to, and then, if that's the case, I can go to the release-0.1 branch in CAPI and see if I can do a fix. So I'll leave this issue in CAPA for now, but I may transfer it over to CAPI based on what I find. Sounds...
E: Yeah, I just, yeah. So he said it gets stuck in an endless loop. The only time I've ever been unable to delete a cluster is if I have machines that are still around and I try to delete the cluster first; as long as the machines are gone, I've never had an issue eventually getting rid of the cluster. So yeah, I will investigate this one. So...