From YouTube: 20190408 - Cluster API Provider AWS Office Hours
A: Hello, and welcome to the Monday, April 8th edition of the Cluster API Provider AWS office hours. We have a relatively light agenda today, so if you have any topics that you want to bring up, please go ahead and add them to the agenda. Also, if you are attending live, please go ahead and add your name to the attendee list in the notes as well; I've linked that in the group chat.

So, first item on the agenda: the 0.2.0 release is on GitHub, and shortly after we released it we had a few issues come in, so we will have a 0.2.1 release here soon. The one issue that I am aware of that we probably want to get in is related to the PRs that are actually being brought up later in the agenda. So with that, let me go ahead and toss it over to Seth to talk about the tagging, security groups, and cloud providers.
B: But then it became clear pretty rapidly that the way it wanted to manage them was to delete them, because they weren't part of the spec for that security group. So Naadir was kind enough to file issue 704 about what the right way is to share resources between Cluster API and the in-tree cloud provider. I don't actually have a very good way to refer to those two things separately, because they're both cloud providers, right? But that's what I have a PR for, sort of speculatively, to fix: by creating a load balancer group that's separate from the other security groups, and then moving some of the tagging around so that we basically don't end up having both things trying to manage the same resource. But I definitely wanted to bring up for discussion what the right approaches are for those sorts of questions. Yeah.
A: I know kops for sure tags things in a similar way, and I was actually hoping that Justin would be on today, but it doesn't look like he is. I'm not sure if kops has had similar issues with this, or if they've just not managed either the security groups or the ELBs in the way that we are, and haven't come across the same issue, I think.
A: Yeah, I think it would just happen if you came through and ran another command with kops, so it's not necessarily active management like we are doing with Cluster API, but it is meant to reconcile the intended state in a similar way. So it may only be something that they would hit if they were running a reconfigure operation or an upgrade, versus us hitting it periodically as part of the resync. Yes.
B: I forget what the number is of the one that I put together as a proof of concept for 704. It uses two tags: one for, like, "these are the resources managed by CAPA," and then it separates out the kubernetes.io/cluster tag, and that's the handoff mechanism, as in "I give these resources to you, in-tree cloud provider, and I promise I won't look at them." That's sort of the thought that I've had. It seemed to work okay when I tried it, but take that for what it's worth. Yeah.
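
As a rough illustration of the two-tag split described above: this is a minimal sketch only, the CAPA ownership key name below is hypothetical, and only kubernetes.io/cluster/<name> is the existing convention the in-tree AWS cloud provider understands; the actual PR may use different key names and values.

package tags

import "fmt"

// Hypothetical tag keys illustrating the two-tag split. The CAPA ownership
// key is made up for illustration; kubernetes.io/cluster/<name> is the tag
// the in-tree AWS cloud provider already recognizes.
const (
	capaOwnedKeyFmt     = "sigs.k8s.io/cluster-api-provider-aws/cluster/%s" // illustrative only
	inTreeClusterKeyFmt = "kubernetes.io/cluster/%s"                        // existing convention
)

// BuildTags separates "CAPA manages this resource" from "this resource is
// handed off to the in-tree cloud provider," so the two controllers never
// both believe they own the same thing.
func BuildTags(clusterName string, handOffToCloudProvider bool) map[string]string {
	tags := map[string]string{
		fmt.Sprintf(capaOwnedKeyFmt, clusterName): "owned",
	}
	if handOffToCloudProvider {
		// "shared" signals that another component may also create resources
		// under this cluster name; CAPA promises not to manage them further.
		tags[fmt.Sprintf(inTreeClusterKeyFmt, clusterName)] = "shared"
	}
	return tags
}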
B: The other sort of knock-on piece of that that I'd appreciate folks taking a look at is the managed tag that's in there. A managed=true tag is sort of redundant with the separate tag that I introduced for "CAPA owns this thing," but I'm not 100% sure, so if you could take a look with an eye on that, I'd appreciate it. Yeah.
A: Yeah, the only reason why we really added that before was specifically for being able to differentiate things that we spun up as part of Cluster API versus things that were potentially spun up from the integrated cloud provider. So if we're using distinct tags, then the managed tag would definitely be obsolete.
C: A side question of mine around this was: if both providers only make use of the cluster name tag, then there's always a risk down the line, as their functionality increases, that they start modifying resources that we're also spinning up. Do we need a long-term solution to this, and what would it look like?
A: That'd be great. I just want to make sure of two things: one, whether there's a legitimate reason why they're using the same tag and why they're not hitting issues versus the reason why we're hitting issues; and the other being, if we are hitting these issues and it's something that they might likely hit in the future, then we definitely want to make them aware of it as well.
D: Yes, so as was just mentioned, while working on 704 there's been an issue about a nil pointer dereference in the controller code, and it seems that 697 fixes that. While I was working on 701, I found the same issue and basically fixed it as well within my PR, so we should probably decide what to do in this case. I personally think we should just merge 697 first, and then that will simplify 701; yeah, that's probably the way to go, if you ask me.
A: Yeah, I think that sounds good to me. I'm definitely in favor of more targeted fixes versus kind of pulling in fixes through another change, if at all possible. So I will definitely go ahead and take a look at both of those PRs today, and that ordering looks good to me. Cool, thank you. And then, well, on to the next item on the agenda.
D: So I worked on extending the clusterawsadm command; I introduced an additional "create HA cluster" subcommand, and basically the status of that is that it seems to work. I'm able to create a cluster with three control plane nodes, then I pivot it from the kind cluster, and eventually I'm able to get the worker node (a single worker node, for now) running and joining the cluster. But I hit some issues, like networking issues, on AWS that stopped me from progressing, because I wanted to sort this out first to make sure that when the PR is ready for review, it will actually work when someone tries to run it. So once 697 and 701 are merged, I should be pretty much done. I still need to tidy up some things around the PR, but the majority of the work is done, basically.
A: Alright, so one thing, and I wasn't predicting this kind of coming in as quickly as it did when I initially proposed the subcommand under clusterawsadm: we did have a PR recently in upstream Cluster API to add support for deploying multiple control plane machines with clusterctl directly. I don't know how that workflow necessarily differs from the work that you've been doing for enabling HA, so that may be another alternative we can pull in for the HA case.
E: Yeah, I'll give it a shot, though. Essentially, all that that change really does is: if the specified machines YAML has multiple control plane machines in it, it kind of just takes the first one, spins it up, waits for it to be ready, and then instantiates the other two and has them join the cluster. And then all of the helpers associated with doing that were modified to accept multiple control plane instances and follow that same path. A lot of that logic, I believe, is due to an upstream issue where they have to be joined serially.
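
A rough sketch of that sequencing, using hypothetical helper names (createMachine, waitForNodeReady) and a simplified Machine type rather than the real clusterctl code:

package sketch

import "fmt"

// Machine is a simplified stand-in for the Cluster API Machine object,
// just enough to show the ordering; not the real type.
type Machine struct {
	Name         string
	ControlPlane bool
}

// createMachine and waitForNodeReady are hypothetical stand-ins for the
// helpers that create a Machine and block until its Node reports Ready.
func createMachine(m Machine) error    { fmt.Println("creating", m.Name); return nil }
func waitForNodeReady(m Machine) error { fmt.Println("waiting for", m.Name); return nil }

// createControlPlane mirrors the flow described above: bring up the first
// control plane machine, wait for it to be ready, then bring up the
// remaining control plane machines one at a time and have them join.
// The joins stay serial because kubeadm does no locking around the
// kubeadm-config ConfigMap, as discussed next.
func createControlPlane(machines []Machine) error {
	var controlPlane []Machine
	for _, m := range machines {
		if m.ControlPlane {
			controlPlane = append(controlPlane, m)
		}
	}
	if len(controlPlane) == 0 {
		return fmt.Errorf("no control plane machines in the provided machines YAML")
	}

	// First control plane machine: create it and wait for it to become ready.
	if err := createMachine(controlPlane[0]); err != nil {
		return err
	}
	if err := waitForNodeReady(controlPlane[0]); err != nil {
		return err
	}

	// Remaining control plane machines: created and joined serially.
	for _, m := range controlPlane[1:] {
		if err := createMachine(m); err != nil {
			return err
		}
		if err := waitForNodeReady(m); err != nil {
			return err
		}
	}
	return nil
}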
A: Yeah, currently kubeadm doesn't do any locking around the handling of the kubeadm-config ConfigMap. We've recently seen some traction on that issue, and there's a proposed PR out there that would target kubeadm for 1.15, but because it's new functionality, I think the chances of it being backported to earlier versions of kubeadm are probably unlikely. So we would probably need to continue to do it serially until at least the lowest version that we support in Cluster API is 1.15.
D: Okay, so just to summarize on my side: if I use the apply-machines call from CAPI, would that include the change you guys merged, or does it live somewhere else? Because what I do in my code is basically just call apply machines from CAPI, and that waits, and then I wait for the first control plane node to appear, and then I create the two others serially as well.
A: Yeah, so if we were to consume that, what we would probably want to do is get that multi-control-plane support merged into the release-0.1 branch for Cluster API, and then we could update our vendor reference to track the release-0.1 branch. That way it wouldn't necessarily require a full release of Cluster API to consume it, and it would also ensure that we're tracking a branch that isn't going to potentially include breaking changes as Cluster API starts working on v1alpha2.
C: There's been talk, this big talk, from AWS and others around how to handle spot pricing and advanced AWS scaling functionality, and not putting all of this into the cluster autoscaler code. So this is probably going to have to find a home at some point, and it's been reiterated a lot that it will eventually end up in Cluster API. So it's just something to be aware of; I haven't really got any deep thoughts on it right now.
A: Yeah, and I think this touches on another topic. This is probably a little out of scope for this meeting, and we'll probably want to bring it up at least at the cluster autoscaler talk at Wednesday's meeting, but in general we're already dealing with issues, as we talk about the Cluster API and autoscaler integration, where there's behavior being done strictly in the autoscaler that we probably also want to be able to take advantage of within Cluster API.

So, even if I don't necessarily know that Cluster API should own it, maybe it should live somewhere so that it can be imported as a library and used in both places, or we should determine some way to share that and not have distinct behaviors depending on whether or not the autoscaler is used.