From YouTube: 20190812 - Cluster API Provider AWS Office Hours
A: All right, hello and welcome to the August 12th edition of the AWS provider for Cluster API office hours, a sub-project of both Cluster API and SIG Cluster Lifecycle. If you have any items that you'd like to discuss, please go ahead and add them to the agenda. We do have a new doc out, so I would work on updating those references to start with.

I have an announcement: the 0.3.7 release was cut, I believe last week. It updates the vendored Cluster API to v0.1.9, includes a bug fix to query VPCs before attempting to delete them, enables backoff and retries for subsequent AWS calls to help improve some of the eventual-consistency handling for the AWS API, and includes a fix to the cluster actuator to remove the control-plane-ready annotation if all of the control plane nodes are deleted. So please go ahead, kick the tires with that release, and let us know if you have any issues.
A: If anybody's wondering, there are a few different reasons for the conversion to Kubebuilder v2. One is that it makes webhook creation and management a lot better than v1 did. The other is to provide consistency across the repos for any new contributors onboarding: if they're familiar with one provider, then it should be relatively easy to contribute to a different provider, because we hopefully have similar layouts, similar tooling, and a similar experience around that.
D: So the great news is, I was able to make a functioning cluster from only two components, just from YAML, which was really cool. I didn't have a kubeconfig to look up, and I was going to contribute that work, but I just wanted to confirm: that was done by CAPA in v1alpha1, and it's going to move into the bootstrap provider for v1alpha2. I just don't know if there's still discussion about that or if it's confirmed.
B: If I remember correctly, the proposal actually doesn't specify anything about it, so we might want to bring this up at the Cluster API meeting on Wednesday to discuss whether the bootstrap provider should always create a kubeconfig or not. I'm more in favor of always creating it, but there might be some other use cases that I'm not thinking of.
E: I mean, with the kubeconfigs that we're talking about now, I guess what I'm wondering is: if there's just one controller, then one secret is fine, but if there are multiple controllers, maybe we should have a secret per controller, so there's no chance of them stepping on each other. It's just not clear to me. It sounds like there's just going to be one controller writing the secret and multiple readers. That's possible.
A: I think there's potentially some value in having separation, especially when we start talking about the CA certificates — and in particular the key part of the CA certs — versus the kubeconfig secret, because that would give us more flexibility in our RBAC to say what has access to the CA data versus what has access to consume the kubeconfig. And in the case of the kubeadm provider, what we've talked about is basically just exposing the admin kubeconfig, but over time we may have reason to expose a kubeconfig with lower permissions, so I don't think we want to necessarily combine those. I know there was some discussion about separating the individual CA secrets and all the certificate secrets; I think we still have to have that discussion, but separating the kubeadm config — or the kubeconfig — from the other secrets makes a lot of sense.
A: So I think there are potentially two ways that we can tackle this. One would be, like you said, to expose support to optionally create the ELB as internal — an internal ELB versus an external ELB. The other: I'm wondering if we could potentially support it as part of a bring-your-own-network scenario. I don't know which is necessarily better; obviously, having it exposed at the higher level would be an improved user experience for people that want to do that.

But I wonder if that's also creating a kind of shoot-yourself-in-the-foot scenario, where somebody doesn't necessarily have that connectivity configured, and then we're just standing up a load balancer that's inaccessible from where they're standing it up. So I wonder if making it a requirement to have it pre-existing, as part of a bring-your-own-network type scenario, would lessen some of the support issues that might come up from people trying to leverage the internal load balancer without that connectivity.
E: I mean, I think that's fair — I think it's a fair concern — but in this case, given the amount of work that one would have to do to bring up all the infrastructure just to be able to change this one property of the ELB, the remaining topology is the same.
E: Given that, I wonder if making it clear in the documentation — or wherever it needs to be made clear, in the API types and comments — would be an acceptable compromise. So yes, a user could shoot themselves in the foot, but we would make an effort to call out in the documentation that you need to ensure the connectivity.
E
You
know
you
would
you
would
shoot
yourself
in
the
foot,
but
you
would
be
able
to
use
kappa
to.
You
know
may
be
stitch.
You
know
stitch
your
foot
up
after
the
after
after
shooting
it,
because
I
think
what
would
happen
is
you
would
deploy
the
cluster,
or
rather
you
create
a
cluster
object.
Capital
would
go
and
reconcile
and
create
the
AWS
resources.
It
would
then
try
continually
to
reach
that
workload
cluster
and
if
you
did
not
provide
some,
you
know
mechanism
for
doing
that.
E
It
would
just
continue
to
fail,
and
you
know
back
off
and
retry
at
some
point.
You
would
hopefully
just
discover
this
discover.
You
know
what
what
problem
is,
and
maybe
we
could
do
something
to
improve
that
experience,
and
then
you
wouldn't
be
able
to
say.
Okay,
let
me
delete
this
cluster
object
at
which
point
capital
would
remove
all
those
resources,
so
you
wouldn't
be
left
with.
B: Sorry — I was just going to say, I think it would be okay to support a private ELB option with documentation. That said, for example, if you run clusterctl it might not work in kind, because you would have to set up a VPN in Docker as well, especially on Mac. So as long as all of those caveats are documented, I don't see why not.
B
Wouldn't
we
because
pre
creating
the
yield
beam
might
be
a
little
bit
kind
of
counterintuitive,
because
you
will
have
to
match
the
security
groups
which
are
created
like
when
the
cluster
is
created
and
then
yeah
the
firewall
rules
and
all
those
things.
So
it
might
be
more
work
to
do
that.
I
guess
then,
actually
supporting
a
VPN
locally.
B
Yeah
yeah,
and
this
will
will
need
to
be
part
of
the
documentation
as
well.
It's,
for
example,
the
remote.
No
reference
wouldn't
work.
If
you
have
a
management
cluster
like
running
in
kind,
for
example,
locally
without
a
VPN,
but
in
like
it
probably
work
if,
like
it's
inside,
the
AWS
network
is
I
mean
the
private
you'll
be
will
be
discovered,
and
but
it
has
to
be
in
the
same
PPC,
I
believe
or
something
like
that
anyway.
B: The documentation will need to be really good around this, I guess, because otherwise debugging something like this would be kind of hard, for sure. There are going to be a number of things that won't work — for example, the MachineSet scaling or the MachineDeployment as well, because the remote node reference won't be there.
A
I'm
wondering
if
this
is
something
that
we'd
be
able
to
do
some
type
of
validation
around
in
the
cluster
controller
itself,
well
or
at
least
the
AWS
cluster
controller,
because
I
don't
think
this
is
something
that
we'd
want
to
do
more
generically.
This
is
like
if
they
define
that
it's
a
private
internal
load
balancer,
then
we
attempt
to
actually
connect
to
and
and
verify
that
we
get.
E: So I think that would be nice to have, but it actually throws a wrench in my idea of establishing connectivity in a way that doesn't require any changes to CAPA. What I was thinking is, in the future — and this is kind of a separate upstream issue — if a kubeconfig can support a specific HTTP or SOCKS proxy, then I could just update the kubeconfig, and CAPA would behave the same: it would talk to that endpoint, which talks with the cluster API endpoint, sort of transparently. I mean, it would be going through a proxy, but as far as CAPA knows, it's just using some kubeconfig that tells it how to reach the cluster and how to use this proxy, etc.
E
Now,
if
we
do
validation
on
the
EOB
itself
prior
to
actually
you
know
prior
to
like
saying
okay,
this
is
a
cluster
API
endpoint,
then
I'm
not
sure
how
to
I'm,
not
I'm,
not
sure
I'm,
not
sure
how
to
sort
of
tell
Kappa
hey
this.
E
Is
you
know
here's
this
way
that
you
reach
the
ELB,
then
I
then
I
would
have
then
the
only
solution
or
then
the
only
sort
of
network
solution
you
you'd
I
think
have
to
have
is
some
kind
of
totally
transparent
proxy
so
that
yeah
I,
don't
I,
don't
know,
maybe
there's
something
we
could
figure
out.
I
think.
B
Until
until
that's
something
that
we
have
implemented
like
TCP
tunnel
check
like
might
be,
okay-
and
just
this
is
a
warning
to
the
user
say
like
hey
I
can
connect
to
this.
If
you
don't
have
connectivity
things
my
break
and
maybe
see
this
talk
for
more
information,
nothing
breaking
per
se,
but
more
like
of
a
warning
event.
I
think
that's
what
Jason
was
suggesting.
E
Okay,
yeah
warning
III,
sorry,
I
I
thought
I
thought
it
was
gonna,
be
an
error.
Yeah
burning
my
dull
times,
yeah.
A
Yeah,
definitely
not
you
know,
cuz
in
the
case
that
somebody
doesn't
have
some
configured,
give
them
a
chance
to
get
it
up
and
and
get
back
into
a
good
state.
But
just
in
the
case
that
you
know
what
especially,
is
around
how
we
follow
the
node
references
and
stuff
like
that,
just
you
know
give
somebody
a
heads
up
that
this
might
be
an
issue
at
least.
B
Being
one
outfit
sounds
good,
so
one
thing
that
myself
and
Asian
were
talking
about
this
morning
was
how
the
p150
user
experience
would
look
like
in
v1
and
for
one
we
had
closer
cuddle
built
inside
the
AWS
provider
and
then,
like
a
user,
would
generate
a
yahoo
files
and
like
apply
those
llamó
files,
etc.
I
know
that,
like
we
still
need
support
the
pivot
fully
in
b1a4.
B
Are
these
from
microscopy
I,
take
you
billion
provider
and
then
put
them
into
the
provider,
components
and
then
running
closer
cuddle
on
it.
So
it's
it's
kind
of
like
more,
like
I
wanted
to
just
like
a
more
brainstorm
like.
What
do
we
think,
like
the
experience?
Would
look
like
in
Envy
104,
given
that
cluster
idiom
might
not
be
ready,
probably
won't
be
ready
in
two
weeks,
so
I'm
not
sure,
if,
like
also
the
New
Relic
folks
here,
I
think
entries
on
the
call
of
what
you
think
about
the
user.
Experience.
D
Yeah
I
mean
so
I
I
think
the
biggest
thing
I
know
there's
an
issue
somewhere
that
I
think
Andy
opened
about
this,
but
like
having
to
have
five
objects
in
my
provider,
components
and
having
to
like
fine-tune
each
of
those
five
is
I
mean.
Maybe
it's
coming
a
little
bit
cumbersome,
especially
if,
like
I'm,
not
gonna,
do
anything
too
crazy
with
my
AWS
machine
or
AWS
cluster
objects.
D
So
if
we
gets
like
simplify
somehow
like
if
we
can
get
back
to
creating
a
cluster
and
the
Machine
object
in
any
kind
of
way
that
doesn't
just
ruin
the
design
of
be
one
of
the
two.
That
could
be
pretty
helpful,
though
I
would
say
we're,
probably
not
the
best
people
to
ask
around
cluster
cuddle.
Only
because,
like
we
have
a
very
heavily
modified
like
machine
and
cluster
yeah
mole
files
that
we
use
like,
we
don't
use
any
of
the
generated
components
because
of
just
how
heavily
customized
they
are.
B
Okay
and
for
this,
the
simplified
user
experience
like
I
have
been
out
like
for
two
weeks
so,
like
not
sure
like,
if
there
have
been
any
like
progress
on
on
that.
But
it
would
definitely
be
great
like
have
that
51
out
for
two
or
like,
maybe
soon
after
the
release,
so
that
yeah,
a
user
doesn't
have
to
like
to
create
like
ten
different
options
to
to
create
one
machine
or
something
are.
C: That's assuming that you can get a management cluster. Getting a Kubernetes cluster running as the management cluster would not be too hard if we have proper releases of all of the components for all the providers that we need, because kustomize does do some cool remote URL stuff. So you can set up all the things with one or two kustomize commands, assuming that we have all the versions in place.
C
So
one
thing
that
came
out
of
capti
was
the
tool
that
I
wrote,
called
capti
control
and
just
sort
of
like
became
something
that
was,
you
know,
Oh
a
one-shot
like
create
management
cluster
with
all
the
things
and
then
options
to
override
all
of
the
things
that
you
need
to
override
a
test.
I
don't
particularly
use
that
much
anymore
because
it's
like
it
was
a
development
tool
and
I
used
tilt
instead
to
set
up
my
development
environment
so,
like
I,
haven't,
really
used
it,
but
something
like
that
I
could
see
being
useful
for.
D
Got
that
working
nicely
with
the
Kappa
provider
by
the
way?
So
if
anybody
needs
help,
that's
awesome,
let
me
know
I
had
to
do
some
hacky
stuff,
but
it
does
work.
I
am
super
interested
well.
Yeah.
I
didn't
realize
that
there
was
a
little
bit
of
a
con
in
that
like
Kathy.
Just
does
not
work
on
Mac,
but
that's
another.
E
So
I'm
wondering
if
something
like
a
you
know,
like
a
you
know
like
a
basic
shell
script
which
could
be,
you
know
fronted
by
a
coop
cuddle
plug-in,
for
example,
you
know,
would
do
the
trick,
something
you
know
that
give
you
an
experience
like
you
know,
coop
cuddle,
run
that'll,
give
you
a
deployment.
You
right.
You
can
give
some
basic
options,
so
you
could
possibly
have
coop
cuddle.
You
know
kappa
create
cluster.
You
know
with
maybe
some
basic
options
exposed
right
like
just
like
in
coop
cuddle
run,
you
can't
you
can't.
B: So I was wondering — now that clusterctl can work across providers, if we can define a nicer experience with this issue that Andrew linked, then it might also be a nicer development experience as well, but TBD on that. I'm more wondering about a new user, right — someone that doesn't know anything about Cluster API and says, "I want to have a Kubernetes cluster stand up."
B
Yes,
what
do
I
have
to
do
from
to
go
from
zero
to
a
like
a
cross-trained
up?
Yes,
it's
gonna
be
a
fun
ride.
Today,
probably
one
offer
too
so
simplifying
that
and
making
sure
that,
like
it's
like
a
very
streamlined
that
would
be.
That
would
be
awesome,
but
what
you
mentioned
like
if
we
can
create
something
like
that
inside
a
cluster
cuddle
just
like
for
as
default
conventions,
and
things
like
that.
That
would
be
great,
in
my
opinion,.
E
Yeah
personally,
I
guess
I've
been
using
croup
kettle
over
cluster
Hill,
but
maybe
that's
that's
just
the
alpha-1
sort
of
political
and
I
have
I,
have
a
kind
management,
cluster
and
I
kinda,
I
kind
of
like
to
do
it
or
the
hard
way
with
with
coop
cuddle
but
yeah
cluster
cuddle.
If
it
there's
not
going
to
be
a
specific
one
for
each
provider,
then
I.
F: The big thing that clusterctl does is manage the pivot. A lot of the time, when I'm doing local development, I don't want that pivot to occur, so I just use kubectl as well. But it's also much easier to have a one-shot YAML file apply for v1alpha1 versus v1alpha2, which needs a little bit more manual intervention. So I think that's why a clusterctl might be necessary.
B
Yeah
and
I
would
add,
like
I,
also
found
out
that,
like
kind
of
having
a
magic
kind
of
management
plus,
it's
actually
a
little
bit
easier
than
running
close
to
cuddle,
but
it
needs
like
a
dead
experience
like
with
kubernetes
in
kind.
They
like
a
new
user,
might
not
have
so
because
we
want
to
support
we're
gonna,
keep
the
support
for
pivot,
as
as
far
as
I
understood,
we
might
have
to
fix
the
first
user
experience
as
well,
because
we
cannot
require
people
to
necessarily
have
a
local
management
cluster.
A
Right
I
just
wanted
to
sit
there
and
try
to
go
through
some
of
the
backlog
to
wrap
things
up
looks
like
the
last
issue
that
we
touched
was
936,
so
looking
937,
938,
939
940
are
all
documentation,
issues
related
to
defining
release
blocking
tests
they're
all
you
know.
These
are
ones
that
I'm
planning
on
working
on
so
not
going
to
go
into
those
too
much.
We
have
this
one
around
documentation,
clarity,.
E
Saw
this
I,
maybe
I,
can
help
add
something
to
the
docs
I
did
find
this
a
it's
a
little
confusing,
because
there's
there's
like
get
text
and
subsea
that
that
we
actually
have
we
depend
on
on
a
on
a
go
implementation
of
end
subs
for
I'm,
not
sure
I'm,
not
sure
why,
but
that's
that's
there,
maybe
for
the
for
the
automated
testing.
So
yes
I,
if
you,
if
you
want
I,
can
I
can
add
some
some
clarification,
some
notes
or
something.
B
It
is
this
for
be
one
awful
one
or
because
I
don't
think
it
says,
I.
A
Default
to
outer
trade
provider
for
V
1,
alpha,
2,
plus
alright,
yeah
Tim,
created
this.
We
talked
about
this
potentially
being
something
it
would
be
a
blocker.
Actually
it's
already
on
next,
an
important
long
term.
There
were
potential
issues,
because
right
now,
sig
AWS
still
recommends
using
the
entry
provider
so
not
much.
We
can
do
until
they're
willing
to
support
the
how
to
treat
provider
in
a
bigger.
A
Convert
project
to
queue
builder
B,
to
your
active
pins,
working
on
that
just
a
tracking
issue,
yeah
all
right
and
remove
local
image
builder
all
right.
This
is
because
there's
now
the
image
builder
repo
under
kubernetes
sakes-
and
we
have
currently
the
vSphere
provider-
has
moved
their
image
builder
tooling
over
there,
and
it
would
be
nice
if
we
can
go
ahead
and
get
our
stuff
converged
on
there
as
well,
so
that
you
know
we
take
some
shared
responsibilities
of
that
and
have
more
unified
image
building
across
different
providers.
D
I
have
a
quick
question,
so
it
sounds
like
a
lot
of
the
tickets
on
this
backlog
may
be
helped
out
when
we
get
the
Q
Builder
v2
in
with
validating
my
books.
Would
it
be
worth
I
guess,
starting
to
document
and
some
of
the
rules
that
we're
gonna
implement,
because
there
are
some
length
concerns.
There's
some.
You
know
using
dots
concerns
and
there
some
people
get
name
concerns
that
can
probably
all
be
nicely
wrapped
up
into
one
I
guess
work
item.
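The length, dot, and naming rules mentioned here are the kind of checks a validating webhook would enforce. Below is a minimal sketch of such a validation function; the 63-character limit and RFC 1123 label shape are illustrative assumptions (common Kubernetes conventions), not rules the project has settled on.

```go
package main

import (
	"fmt"
	"regexp"
)

// labelRE matches a lowercase RFC 1123 label: alphanumerics and dashes,
// starting and ending with an alphanumeric — which also rules out dots.
var labelRE = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateName sketches the webhook-style checks discussed above:
// a maximum length and a restricted character set.
func validateName(name string) error {
	if len(name) > 63 {
		return fmt.Errorf("name %q exceeds 63 characters", name)
	}
	if !labelRE.MatchString(name) {
		return fmt.Errorf("name %q must be a lowercase RFC 1123 label (no dots)", name)
	}
	return nil
}

func main() {
	fmt.Println(validateName("my-cluster") == nil) // true
	fmt.Println(validateName("my.cluster") == nil) // false
}
```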