From YouTube: 20200727 - Cluster API Provider AWS Office Hours
A
All right, hello and welcome to the Monday, July 27th edition of the Cluster API Provider AWS office hours, a sub-project of both SIG Cluster Lifecycle and Cluster API. Just a reminder: this meeting is being recorded and will be posted to YouTube afterwards, and the meeting abides by the Kubernetes community code of conduct. So, in general, please be excellent to one another.
A
If you are attending live, please go ahead and add your name to the attendee list in the agenda, and if there is anything you want to bring up, please add it to the group topics section of the agenda as well.
A
I believe we will be working towards the official 0.5.5 release here shortly. I don't know if you know off the top of your head, Andy, if there are any blockers for that.
B
I was considering waiting for Naadir to get back from PTO to finish out the bastion PR. There's an issue where we hard-code t2.micro, and that's not available in all availability zones, especially us-west-2d, I think. So if you end up randomly getting your bastion created or placed into that specific AZ, then it will fail. Naadir has a pull request open to try and fix it; it changes the logic to go to some of the newer t3 or t3a types and fall back if needed.
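A minimal sketch of that kind of fallback, assuming aws-sdk-go: check which instance types are actually offered in the bastion's AZ and take the first match from a preference list. The function name and preference order are illustrative, not the code in the actual PR.

```go
// Pick a bastion instance type offered in the target AZ, preferring newer
// burstable types and falling back to t2.micro. Illustrative sketch only;
// pagination of the offerings response is omitted.
package bastion

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// Preference order is an assumption for illustration.
var preferredTypes = []string{"t3.micro", "t3a.micro", "t2.micro"}

func pickBastionInstanceType(client ec2iface.EC2API, az string) (string, error) {
	out, err := client.DescribeInstanceTypeOfferings(&ec2.DescribeInstanceTypeOfferingsInput{
		LocationType: aws.String("availability-zone"),
		Filters: []*ec2.Filter{
			{Name: aws.String("location"), Values: []*string{aws.String(az)}},
		},
	})
	if err != nil {
		return "", err
	}

	offered := map[string]bool{}
	for _, o := range out.InstanceTypeOfferings {
		offered[aws.StringValue(o.InstanceType)] = true
	}
	for _, t := range preferredTypes {
		if offered[t] {
			return t, nil
		}
	}
	return "", fmt.Errorf("no suitable bastion instance type offered in %s", az)
}
```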
B
Somebody also had a suggestion that, rather than hard-coding specific instance types, we hard-code minimum specs, then describe the available instance types and find the cheapest one that matches the minimum specs. But that conversation happened after Naadir went on vacation, so I think we could probably cut the release now, assuming we're happy with what's in there, or we could wait for Naadir to come back and finish the pull request; either would be fine with me, I think.
A
We can continue that discussion asynchronously. The other PSA I wanted to bring up was around the k8s.gcr.io vanity domain for images. The domain flip has been completed, so all of the images that we've been producing and promoting to us.gcr.io/k8s-artifacts-prod are now available using the alias as well: k8s.gcr.io/<image name>.
D
For the past couple of weeks, we've been working pretty much all day, every day, on adding EKS support into CAPA. A couple of updates: I spoke to Richard this morning; he's unable to make this call, but he said he expects the control plane PR to come out of WIP this week. He's working on some tag plumbing right now, and he says if it's not working very shortly, he's just going to open that up as a follow-up PR.
D
So, looking forward to that. There are a couple of PRs that are open but kind of useless without the control plane stuff. For one, the EKS bootstrap provider has a PR that is open. It's a bootstrap provider, so there's a lot to look at in there, but we finished that last week and it's tested and working, with some hacks.
D
You know, hacks due to the lack of a control plane, but that needs some eyes on it, for one, so that would be great. I expect to trade PR reviews with the Weaveworks folks once the control plane stuff gets opened, but it would be great to get some other eyes on them, just from a perspective outside of working intimately with EKS.
D
There is also a PR open for automatically detecting the appropriate official EKS AMI given your region and configuration. That PR doesn't necessarily make sense to go in until there's a control plane, because part of the detection logic is: is this an EKS cluster? If so, maybe prefer the official EKS AMI over the default Ubuntu AMI lookup. So that's open.
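For context, a sketch of what that kind of detection can look like, assuming aws-sdk-go: AWS publishes the recommended EKS-optimized Amazon Linux 2 AMI through a public SSM parameter, so an EKS cluster can resolve it per region and Kubernetes version, while non-EKS clusters keep the existing Ubuntu lookup. The function names are illustrative, not the PR's actual code.

```go
package ami

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ssm"
	"github.com/aws/aws-sdk-go/service/ssm/ssmiface"
)

// eksOptimizedAMI resolves the recommended EKS-optimized Amazon Linux 2 AMI
// for a Kubernetes minor version (e.g. "1.17") in the client's region.
func eksOptimizedAMI(client ssmiface.SSMAPI, k8sVersion string) (string, error) {
	param := fmt.Sprintf("/aws/service/eks/optimized-ami/%s/amazon-linux-2/recommended/image_id", k8sVersion)
	out, err := client.GetParameter(&ssm.GetParameterInput{Name: aws.String(param)})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.Parameter.Value), nil
}

// lookupAMI shows the preference described in the meeting: EKS clusters get
// the official EKS AMI, everything else keeps the default Ubuntu lookup.
func lookupAMI(client ssmiface.SSMAPI, isEKS bool, k8sVersion string) (string, error) {
	if isEKS {
		return eksOptimizedAMI(client, k8sVersion)
	}
	return defaultUbuntuLookup(k8sVersion)
}

// defaultUbuntuLookup is a placeholder for the provider's existing lookup path.
func defaultUbuntuLookup(k8sVersion string) (string, error) {
	return "", fmt.Errorf("not implemented in this sketch")
}
```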
D
I don't really know what will change in that PR depending on the control plane stuff; I don't think very much. So those two are basically open, ready to go, and can be reviewed, but I'll probably slap a hold on them just because they don't do anything in the current code base. So it's really up to how the logistics are preferred. The last thing is that, by the end of this week, we expect to have the AWS machine pool implementation for unmanaged node groups complete. This is using auto scaling groups behind the scenes. The implementation follows the Azure machine pool relatively closely, with some modifications for things that are specific to the auto scaling APIs. There's a lot involved in that: we're pulling in new APIs for auto scaling, and we're doing some new things in EC2 with launch templates. So there are a lot of moving parts in that, but we do have it mostly working.
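A rough sketch of the two moving parts mentioned here, assuming aws-sdk-go: an EC2 launch template that carries the per-instance settings (including bootstrap user data), and an auto scaling group that references it. Field choices like the instance type and sizes are illustrative, not CAPA's actual defaults.

```go
package machinepool

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/autoscaling"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createPool(ec2c *ec2.EC2, asgc *autoscaling.AutoScaling, name, amiID, userData, subnets string) error {
	// 1. The launch template holds per-instance settings: AMI, size, user data.
	lt, err := ec2c.CreateLaunchTemplate(&ec2.CreateLaunchTemplateInput{
		LaunchTemplateName: aws.String(name),
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId:      aws.String(amiID),
			InstanceType: aws.String("m5.large"), // illustrative default
			UserData:     aws.String(userData),   // base64-encoded bootstrap data
		},
	})
	if err != nil {
		return err
	}

	// 2. The auto scaling group owns the replica count and spreads instances
	//    across the subnets (and therefore AZs) it is given.
	_, err = asgc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
		AutoScalingGroupName: aws.String(name),
		MinSize:              aws.Int64(1),
		MaxSize:              aws.Int64(3),
		DesiredCapacity:      aws.Int64(1),
		VPCZoneIdentifier:    aws.String(subnets), // comma-separated subnet IDs
		LaunchTemplate: &autoscaling.LaunchTemplateSpecification{
			LaunchTemplateId: lt.LaunchTemplate.LaunchTemplateId,
			Version:          aws.String("$Latest"),
		},
	})
	return err
}
```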
D
We just need to plumb some more fields, so we expect that PR to be open as well. I think I can stop talking now, so if there are questions or feedback, please go ahead.
D
"Is the AWS machine pool using ASGs ready in the near term?" Yep, that's basically it. So from our perspective, for the New Relic use case, we're going to have a couple of these providers. The first is the unmanaged, ASG-based machine pool provider. At some point, maybe in the medium term, we also expect to do the managed node group provider, and on top of that we are thinking about spot instances as well, and that may present a third provider.
D
ASGs have some plumbing for this. There are some differences that we want to account for, but that's just something we're thinking about as well. We've started with what we think is the most straightforward, which is auto scaling group plus launch template; by straightforward, I mean the one that we have experience using, really.
E
With the auto scaling groups, does that flip where the cluster autoscaler should point? Like, if I want those to autoscale, should I point the cluster autoscaler at an AWS endpoint instead of the Cluster API endpoint for scaling?
D
That's a great question, and in fact the answer is currently unknown. If we were following the way Cluster API works, as the design proposed from the beginning, we would actually implement a machine pool integration for Cluster API in the autoscaler, and the reason for that is that the data flows in that direction for the most part in Cluster API. But we discussed this a couple of weeks back.
D
So there's a pretty solid argument for allowing the machine pool to actually take its replicas data from the other direction, if that makes sense: telling the cluster autoscaler to look at the auto scaling groups directly, and telling Cluster API, hey, you're going to see some changes to the replica count, and that's because we treat AWS as the source of truth and not the machine pool. So it's something that we're still thinking about.
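A minimal sketch of that "AWS as source of truth" idea: instead of pushing MachinePool.Spec.Replicas down to the ASG, the reconciler reads the ASG's desired capacity (which the cluster autoscaler may have changed) and reflects it back onto the MachinePool. The types and field names below are simplified stand-ins, not the actual CAPI/CAPA API.

```go
package machinepool

// ASG is a stand-in for what we read back from the auto scaling group.
type ASG struct {
	DesiredCapacity int32
}

// MachinePool is a stand-in for the Cluster API MachinePool object.
type MachinePool struct {
	SpecReplicas   *int32
	StatusReplicas int32
}

// reconcileReplicas treats the ASG as authoritative when an external
// autoscaler owns scaling, and treats the MachinePool spec as authoritative
// otherwise.
func reconcileReplicas(mp *MachinePool, asg ASG, externallyScaled bool) {
	if externallyScaled {
		// Data flows AWS -> Cluster API: surface what the ASG actually has.
		replicas := asg.DesiredCapacity
		mp.SpecReplicas = &replicas
		mp.StatusReplicas = replicas
		return
	}
	// Data flows Cluster API -> AWS: the spec drives the ASG (pushed elsewhere).
	if mp.SpecReplicas != nil {
		mp.StatusReplicas = *mp.SpecReplicas
	}
}
```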
E
And how does that work with the managed node groups, the actual, like, EKS-managed nodes, right, where the customer doesn't interact with them at all? Those are also a controlled ASG, but controlled, you know, by AWS completely, as far as a customer is concerned. How does that interact with what Cluster API is doing to the machines, provisioning them and understanding what they are? I mean, again, if the source of truth is AWS, is that the same idea?
D
My expectation for the managed node group is really that it will just create one, and any options that you can specify through, like, the AWS API for configuring them can be plumbed that way. But other than that, I expect this controller to just say: does it exist? Does its configuration match what I've passed through? And that's basically it. I mean, machine-level information kind of goes away big time in this configuration, and, you know, we're okay with that.
D
For us, the value in having that object at all is just that the way our dev teams consume Cluster API is by interacting with these objects, and that's their lens into the underlying infrastructure in the cluster. So, while the controller wouldn't actually do very much, like it might not scale it, it might not, you know, handle targeting and terminating individual machines...
E
I'm also assuming that, well, maybe not for the managed node groups right away, but at least with the ASG machine pools, those are all the same instance type, right? You're not doing mixed, because I know an ASG can have spot or non-spot and also different, you know, sizes and types of instances. This is assuming that an ASG is just one type, right?
D
Right now, yes. We have a pretty strong need for mixed instance type support, and for a while that will come in the form of just having multiple ASGs, but we do want that feature to work.
D
We have no idea how, but we do want that to work, and we want auto scaling for that to work, and we want a lot of things that are probably very hard to do. But, like, our Cluster API environment is incredibly large, and anything we can do to have one less machine deployment or one less machine pool is actually huge, because of the number of clusters that we support that are deployed this way.
D
We really want that capability, and I know that there are a lot of things with, like, cluster autoscaler that make it really difficult to do, but, you know, we're committed to figuring out how to do that.
E
And the last question I had about the machine pools is: how is that working with multiple AZs? Is it one machine pool per AZ, or is it automatically setting up one machine pool across all of them, you know, if you're in three AZs, to scale it across all of them? Or is there a way to target each AZ, for EBS volumes and that sort of stuff?
D
So the way that the code is sitting right now, kind of incomplete, is that you basically specify subnets, and then that just kind of determines what AZs you're in. CAPI has the concept of failure domains, which has not fully been plumbed through to things like machine deployments, for example, and I don't know how it should work for machine pools yet, per se. There are a lot of kind of strange things that happen in that interaction.
D
How do we improve the logic for looking that up? So, anyway, to shorten the answer: right now you provide subnets, and that kind of declares what AZs you're in, but we should have failure domain support, which will allow you to say, hey, here's a machine pool, and by default it will look at the failure domains available to you in the cluster and deploy your machine pool that way.
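A small sketch of the placement behaviour being described: today the subnets you list imply your AZs; with failure domain support, an unpinned machine pool could default to every failure domain the cluster knows about. The names below are illustrative stand-ins.

```go
package machinepool

// Subnet is a stand-in for the provider's subnet spec.
type Subnet struct {
	ID               string
	AvailabilityZone string
}

// subnetsForPool picks the subnets a machine pool should span. If the pool
// pins explicit failure domains, only subnets in those AZs are used;
// otherwise all of the cluster's failure domains are used.
func subnetsForPool(clusterSubnets []Subnet, clusterFailureDomains, poolFailureDomains []string) []string {
	domains := poolFailureDomains
	if len(domains) == 0 {
		domains = clusterFailureDomains
	}
	wanted := map[string]bool{}
	for _, fd := range domains {
		wanted[fd] = true
	}

	var ids []string
	for _, s := range clusterSubnets {
		if wanted[s.AvailabilityZone] {
			ids = append(ids, s.ID)
		}
	}
	return ids // joined into the ASG's VPCZoneIdentifier elsewhere
}
```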
A
And this comes back to cluster autoscaler integration as well, both through the Cluster API provider and also through the native AWS cluster autoscaler. Both of those are limited to kind of one AZ right now for scheduling purposes. They don't have enough knowledge internally to know that they need to expand a node pool in a particular AZ to support, you know, pods that are pending scheduling, versus, you know, just increasing the node pool size, right?
A
And I can't actually see the participant list still, so if anybody's raising their hand in chat, I'm not able to see that. Are there any other topics to bring up before we look into the issue triage?
A
All right, so actually I have quite a bit of a list today. Let's see, the first one: figure out how to package resources for EKS.
D
It's me, yes. So just a couple of meetings ago we were discussing, like, hey, we bundle the YAML in CAPI in three separate files and then an aggregate file in CAPA. We haven't had a need to do this yet; now we might, and that's the issue.
A
All right: fix race condition for resolution of artifact folder.
A
We should probably do that at some point. All right, let's see: API evolution for VPC and networking topologies.
B
Or no, this is the machine deployments. So this is the AWS version of the issue that I opened in CAPI. That was really just, like, asking people if this was something that anybody wanted. I don't have strong feelings that a single machine deployment should necessarily span AZs, but it could, if people decided it was worth doing, and I know you had some comments on the CAPI issue, Jason. So this is just the corresponding CAPA one. All right.
A
All right: audit and remove package errors usage in favor of standard lib errors. This came up as Ben was working on fixing some of the lint errors related to the Go 1.13 error wrapping.
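The kind of change that audit implies, shown as a small example: dropping github.com/pkg/errors in favour of the standard library's Go 1.13 wrapping verb and helpers. This is a generic illustration, not a diff from the repository.

```go
package example

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("instance not found")

func createInstance() error {
	if err := describeInstance(); err != nil {
		// Previously: errors.Wrap(err, "creating instance") from pkg/errors.
		// %w wraps the error so callers can still unwrap or match it.
		return fmt.Errorf("creating instance: %w", err)
	}
	return nil
}

func describeInstance() error { return errNotFound }

func caller() bool {
	err := createInstance()
	// errors.Is walks the wrapped chain, replacing errors.Cause comparisons.
	return errors.Is(err, errNotFound)
}
```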
A
Yes, so Gab was doing some work and we discovered that, right now, if you specify a specific subnet ID on an AWSMachine and there's a failure domain defined on the machine, say for a kubeadm control plane, the subnet ID will actually cause the AWSMachine to be created in that subnet, regardless of whether or not it actually should be in that specific AZ based on the failure domain configuration.
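A sketch of the precedence just described, and the conflict it can create: an explicit subnet ID on the AWSMachine wins over the failure domain set on the Machine, which can place the instance in the "wrong" AZ. The types below are simplified stand-ins.

```go
package placement

type AWSMachineSpec struct {
	SubnetID      *string // explicit subnet pin, optional
	FailureDomain *string // AZ requested by the owning Machine, optional
}

// resolveSubnet mirrors the current precedence: subnet ID first, then a
// lookup by failure domain, then a provider default. A validation webhook
// could instead reject specs where the pinned subnet's AZ and the failure
// domain disagree.
func resolveSubnet(spec AWSMachineSpec, subnetByAZ map[string]string, defaultSubnet string) string {
	if spec.SubnetID != nil {
		return *spec.SubnetID // wins even if it contradicts the failure domain
	}
	if spec.FailureDomain != nil {
		if s, ok := subnetByAZ[*spec.FailureDomain]; ok {
			return s
		}
	}
	return defaultSubnet
}
```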
A
Like, longer term, when we don't have the differentiation between the deployment the webhooks are running in and the controller, I think it would be nice if we could do it as part of a webhook, but that requires us to fix the multi-tenancy.
A
Because previously, when we reloaded the systemd unit file, it would re-pull the sources of information, and with this newer image, whether it's the cloud-init version or something different on the Ubuntu host, it's no longer doing that anymore. It's reusing the previously pulled source data, and that's where it kind of crept up.
A
That
work
for
the
ubuntu
ones.
Yes,
I
know
robert
was
looking
into
trying
to
downgrade
cloudinit
to
see
if
the
issue
went
away,
but
that
doesn't
look
like
it
necessarily
did
it.
I
started
digging
into
this
issue
late
last
week
and
I'm
planning
on
digging
back
into
to
try
to
help
troubleshooting
further
over
the
next
couple
of
days.
So
hopefully
we
can
come
up
with
a
resolution
soon
or
at
least
work
around.
A
All right, and then the next one is: instances should be tagged with their machine name. This is an issue that Ben put up there; he's thinking that, if we add a tag to the instance, it might be easier for folks who are trying to match up to specific machines if they're starting from, like, the AWS instance itself.
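A sketch of what the issue asks for, assuming aws-sdk-go: stamp the Machine's name onto the instance as a tag so someone starting from the EC2 console can find the corresponding Cluster API object. The tag key used here is hypothetical; the real key would follow CAPA's tagging conventions.

```go
package tags

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

func tagInstanceWithMachineName(client ec2iface.EC2API, instanceID, machineName string) error {
	_, err := client.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{aws.String(instanceID)},
		Tags: []*ec2.Tag{
			// Hypothetical tag key, for illustration only.
			{
				Key:   aws.String("sigs.k8s.io/cluster-api-provider-aws/machine-name"),
				Value: aws.String(machineName),
			},
		},
	})
	return err
}
```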
A
All right, I think I will go ahead and leave this one untriaged for now, until we have a chance to dig into it. Did you have somebody in mind for trying to take a look at this a little bit further, Andy, or...?
D
Yeah, yeah. I think we were both getting a little frustrated with, like, the kind of auto-generated nature of the API for the AWS stuff, which means that, like, the struct for tags in every Create call is different.
D
So, like, I think maybe he's thinking about making that work in a more interface-y way, so that you can just apply tags and it will know what to do behind the scenes.
A
No milestone. Let's see: fix error handling for the Go 1.13 errors linter.
A
Then: cleanup, delete old jobs.
D
So last week I took that up with the Azure folks, and I think there's a CAPI issue related to this, but I just wanted to call out that the EKS bootstrap provider doesn't have any refresh requirements, so it will work right out of the box; but for any machine pool implementation that uses a user-data-style pattern to work, we'll need that upstream.
B
I vaguely remember, when Juan-Lee was first demoing machine pools on Azure, well before any pull requests were open, I think he had either hacked the kubeadm bootstrapper, or he did something in the machine pool code or the Azure machine pool code, that was refreshing the token, but I guess it never made it in.
D
Yeah, I went into the Azure channel and I was like, hey, I don't know if this works, and, you know, the folks in there were like, I think it works, and then a couple of minutes later: oh wait, no, that doesn't work at all. So, I think, you know, we'll see what happens. I think the answer was just, hey, let's detect if this needs refreshing and then refresh it.
C
I thought that they were flipping the ready state back to unready so that, like, it would refresh the token. It's a hack around an API, that's for sure, because CABPK refreshes the token until the machine is booted up, but, like, the get-config owner can be either a Machine or a MachinePool, and if the MachinePool is back to not ready, then it would refresh that token or get a new one.
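A heavily simplified sketch of the refresh behaviour being discussed: when the bootstrap config's owner is a MachinePool (so new instances can appear at any time), keep extending the kubeadm bootstrap token's expiration instead of letting it lapse once the first machine boots. This uses client-go against the workload cluster; the secret name handling, threshold, and TTL are illustrative assumptions, not CABPK's actual code.

```go
package bootstrap

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Bootstrap token secrets live in kube-system as bootstrap-token-<id> and
// store their expiry under this key as an RFC 3339 timestamp.
const expirationKey = "expiration"

func refreshTokenIfNeeded(ctx context.Context, cs kubernetes.Interface, secretName string, ownerIsMachinePool bool) error {
	if !ownerIsMachinePool {
		return nil // existing behaviour: no ongoing refresh needed for Machines
	}
	secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, secretName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	expires, err := time.Parse(time.RFC3339, string(secret.Data[expirationKey]))
	if err != nil {
		return err
	}
	// If the token is close to expiring, push the expiration out again
	// (threshold and new TTL are illustrative).
	if time.Until(expires) < 15*time.Minute {
		newExpiry := time.Now().Add(1 * time.Hour).UTC().Format(time.RFC3339)
		secret.Data[expirationKey] = []byte(newExpiry)
		_, err = cs.CoreV1().Secrets("kube-system").Update(ctx, secret, metav1.UpdateOptions{})
	}
	return err
}
```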
A
All right, if not, see you all in two weeks. Thank you very much.