From YouTube: 20190909 - Cluster API Provider AWS Office Hours
B: Alright, if you have access to the document, please go ahead and add yourself to the attendee list, along with any agenda items you might want to talk about, and we will get to them.

So, PSA: v0.4 — this is CAPA — 0.4 is out, and the corresponding Cluster API and kubeadm bootstrap provider releases are out as well. All of these are using the v1alpha2 APIs. If you are looking for new features, they generally need to go into v1alpha3 at this point, for which planning will be starting next week.
B: Well, it's here — let's take a look at the quick start. I think this is a really good way to get a cluster up and running. It tells you you need kubectl, you need kind or some other management cluster; here's how you install the Cluster API components and the kubeadm bootstrapper; and then here you get to an infrastructure provider — and obviously we're AWS — so this is where it's basically saying there are some prerequisites that you need to do.
B: Mm-hmm. I think that's totally fine to say: okay, you now need to go to some document in CAPA. I wouldn't call it a getting-started guide for what's in here; I think this is the CAPA prerequisites, and we can have some text in here that says this is information that you need to read, and most likely execute, in order to be able to use CAPA — so go ahead and do this stuff so that you get your CloudFormation stack and your credentials and whatnot set up.
B: As for how we would necessarily do those — I think I agree with Vince. All of this stuff that says here's where you get CAPI, here's where you get the bootstrap provider, here's how you install a provider, here's how you do a cluster — the cluster and machine and so on — I don't necessarily want to copy and paste that over to CAPA and then have different scenarios for bring-your-own-VPC or not, right? Yeah.
A: The other thing that I worry about is that we specified in this quick start — the common quick start — to go to the provider-specific docs for the latest examples, but we would then just maintain those in CAPI, rather than in the providers themselves. That's where I get a little confused about bouncing back to CAPI, because some of the users who may be contributing to the AWS docs may not even be contributors to upstream Cluster API as well, right?
C: So I think that would be a really nice thing to do — to promise that these docs are generated, right? Like, if we grab a link, then it will be a build-time-only kind of thing that will change. But if we were just purely in AWS updating one of these, then it wouldn't be like an iframe, you know.
C: Yeah, there are so many chicken-and-egg problems around this. Yeah, I think this is a good plan. I'll put an issue in CAPI to do a docs preprocessor, and we can find some naming convention around this — like, we go look in the docs directory right there, or something — so every provider, every other project can do this as well. Yeah.
E: What would your guys' feeling be about whether we should sort of fork CAPA for our own needs, maybe provide these capabilities as PRs that you guys could review, or just simply start from scratch with our own thing? And secondly, I don't have a great understanding of how Cluster API v1alpha2 works.
E: I see that there is the concept of a provider — an infrastructure provider, which is like the actuators, so that would be the thing that provides the infrastructure. I'm not very clear on the bootstrap controller: exactly what it does and how it's consumed. I imagine it's providing bootstrap scripts for instances, but how that's consumed I'm not sure, and I'd also point out that I think we would have to provide our own.
E: We have a set of custom-made bootstrap scripts that get around the fact that we have had to use specific proxies, and for all of the things we install, we can't just use apt-get — we have to configure local dev repositories, mirrors for those things, and whatnot. So we've had to develop our own bootstrap scripts. So yeah, I kind of wanted to just level up my understanding of CAPI and of CAPA and see what your guys' thoughts are about the direction we should take.
A: So the first thing I'll make a comment on is, as far as forking versus contributing towards the existing Cluster API AWS provider, I would say it might be best to take an approach of both. Right now, for v1alpha1 or v1alpha2, obviously we don't support your use case specifically. The thing that I heard is around these security groups in particular; I think with the VPC and the subnets...
A: We already have that covered, to be able to use existing ones, but we definitely attempt to create the security groups today, per cluster. I think that's something we would like to support in the future. However, I would highly suspect that it would require an API-level change, which would mean the earliest we could probably adapt to that for the upstream provider would be v1alpha3, and that's something I think we'd definitely like to figure out how we can do.
B: Master is open for breaking changes, although we haven't gone through any planning sessions yet — that will be next week. I did want to touch on the bootstrapping. So first I'll explain how the bootstrap mechanism works in alpha 2, and then we can talk more about your needs. We have two ways to supply bootstrapping information to the infrastructure provider so that when it creates — in this case — an EC2 instance, it has bootstrap data.
B: One is that you can just generate your data however you feel like, and when you create your Machine, you can specify the bootstrap data as a string on the Machine. So if you have some way to manually come up with it, you can do that. The other is to use a bootstrap provider, and we have one reference implementation that uses kubeadm.
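The first option B describes — hand-supplied bootstrap data on the Machine — might look roughly like the following in a v1alpha2-era manifest. This is a hedged sketch: the field names are best-effort recollections of that API version, and the cloud-config content and names are placeholders, so verify against the CRDs actually installed in your management cluster.

```yaml
# Hedged sketch: supplying bootstrap data by hand on a Machine
# (v1alpha2-era field names; names and content are placeholders).
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: my-machine
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
spec:
  bootstrap:
    # Option 1: hand-written bootstrap data (e.g. cloud-init) as a string.
    data: |
      #cloud-config
      runcmd:
        - /opt/bootstrap/run.sh
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: my-machine
```

With this shape, no bootstrap provider is involved at all — the infrastructure provider just consumes whatever string is in `bootstrap.data`.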
B: So pretty much all of the logic that was in CAPA v1alpha1, CAPV v1alpha1 — I think maybe even the Azure provider — they were all copying and pasting generally the same code that looked at the Cluster and Machine objects and the provider specs and came up with kubeadm bootstrap data. That was for cloud-init. And what we did is, we said we don't want to copy and paste this around anymore.
B: So let's move this into a separate provider. What that is now is meant to be as generic as possible, although there are certain assumptions and prerequisites about the base images that you are using to launch your VMs. We fully expect and require that kubectl, kubeadm, and the kubelet are all pre-installed — well, not kubectl, but the kubelet and kubeadm.
B: So, with AMIs that have what's necessary, you would then create a Machine, an AWSMachine, and a KubeadmConfig, and the combination of those three would result in the cloud-init bootstrap data being generated for your machine, and EC2 would use it when provisioning the instance. You mentioned apt-get — we don't apt-get anywhere as part of the kubeadm bootstrap process.
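The three-object combination B describes might look roughly like this in v1alpha2-era manifests. Treat it as a hedged sketch: the API groups and field names are from memory of that release, and the AMI ID, instance type, and names are all placeholders.

```yaml
# Hedged sketch of the Machine + AWSMachine + KubeadmConfig trio
# (v1alpha2-era APIs; all IDs and names are placeholders).
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: worker-0
spec:
  joinConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: worker-0
spec:
  instanceType: t3.large
  ami:
    id: ami-0123456789abcdef0   # hypothetical AMI with kubelet/kubeadm baked in
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: worker-0
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
spec:
  version: v1.15.3
  bootstrap:
    configRef:            # points at the KubeadmConfig above
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: worker-0
  infrastructureRef:      # points at the AWSMachine above
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: worker-0
```

The bootstrap provider watches the KubeadmConfig, renders cloud-init data from it, and CAPA then uses that data as EC2 user data when it creates the instance.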
E: That would be ideal, but there isn't a very good story at Capital One for baking images. There is a very strict process — there are very strict controls on the AMIs that are used. They're created separately and then they're kind of released into our accounts, and then we consume them. And because of the amount of churn around our bootstrap scripts — which is less these days...
E: But because of the amount of churn, we figured it was better just to have our tool upload the scripts to S3 and have a little hook that we provide in user data that pulls everything down, and not expect anything to be pre-installed. So we install the kubelet and Docker and some other bits and pieces that the organization expects, for us to be in compliance.
E: We do it all from base images. We would like to start baking images — there is a way to do it — but it's been on the back burner because we've just had bigger fish to fry. We have good, working bootstrap scripts, so we just need a way to be able to inject them into our instances.
A: One other option, too: if you are able to leverage kubeadm for the install, and all you really need to do is get the prerequisites installed at instance stand-up time, you might be able to inject that as part of the KubeadmConfig object, where we give you the option to specify either pre or post commands to run before or after kubeadm. You might be able to specify, as part of the pre-commands, something that basically pulls down and executes that script to install the prerequisites. Yeah.
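A's suggestion might look roughly like the following — a hedged sketch of a v1alpha2-era KubeadmConfig using pre/post commands, where the S3 bucket, script path, and log file are all hypothetical stand-ins for whatever the organization's tooling actually uploads:

```yaml
# Hedged sketch: run an external bootstrap script before kubeadm executes.
# Field names are v1alpha2-era; the S3 path and filenames are hypothetical.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: worker-0
spec:
  preKubeadmCommands:
    # Runs before kubeadm: fetch the org's bootstrap script and execute it.
    - aws s3 cp s3://example-bucket/bootstrap.sh /tmp/bootstrap.sh
    - chmod +x /tmp/bootstrap.sh
    - /tmp/bootstrap.sh
  postKubeadmCommands:
    # Runs after kubeadm completes, e.g. for compliance hooks.
    - echo "node bootstrapped" >> /var/log/bootstrap.log
  joinConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
```

This keeps the existing scripts mostly intact — they just get invoked from the pre-commands rather than from hand-rolled user data.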
E: That's worth looking at. My preference would be if there was a way that we could simply lift and shift what we have now and not make any modifications, because they're quite difficult to test as they are, and they're somewhat brittle. But we are going to have to make a pretty significant departure from our current setup.
B: Right now in alpha 2, we have this type called KubeadmConfig. This is a CRD — it is a first-class resource — and on the status we have two required fields: ready and bootstrap data. Bootstrap data is just a byte array. We've had some comments about maybe making this a reference to a Secret, so that if there were any private data it could be in a Secret instead of in plain text in a KubeadmConfig status, but that's not changing for what we just released in alpha 2.
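The status contract B describes might look roughly like this for a v1alpha2-era bootstrap provider — a hedged sketch where the field names are best-effort and the payload is a truncated placeholder:

```yaml
# Hedged sketch of the bootstrap-provider status contract (v1alpha2 era):
# the controller flips `ready` and fills `bootstrapData` with a byte array
# (base64-encoded in the API). The payload below is a truncated placeholder.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: worker-0
status:
  ready: true
  bootstrapData: I2Nsb3VkLWNvbmZpZwo...
```

A custom bootstrap provider would populate these same two fields on its own CRD, and the rest of the machinery would not need to know the difference.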
B: So in alpha 2, it's just in this bootstrap data. If you are writing a provider, you would flip ready to true, fill in the bootstrap data, and the bootstrap provider's job is done at that point. It is the Cluster API machine controller that is responsible for pulling this data from the KubeadmConfig status — or whatever your type is — and putting it into the Machine.
B: And then, in either situation — whether you're using a provider or not — eventually this data field gets filled in, either by hand or because we copy it. Once it's populated, the machine controller — or sorry, the infrastructure controller, CAPA in this case — would know that it can go ahead and create an EC2 instance. OK.
B: I mean, the kubeadm provider will do some certificate management if you're not specifying certificates, so it'll generate those for you. It also will generate the kubeadm configs — the init, cluster, or join configurations — and then it'll generate the cloud-init data that's got all of that in there. So it sounds to me like you might be better off — I mean, you could copy and paste what's in the kubeadm provider just to have a template, but you would probably rip out what, 90% of the code that's doing anything.
E: Yeah, we really just need that bootstrap — we just need to be able to provide the bootstrap script. The way certificates work is we have a couple of models, and we're deprecating one of them. We wrote our own CLI, which generates a set of initial certificates and uploads them to S3, and then those are pulled down to instances. We're deprecating that in favor of the other strategy, which integrates with our enterprise Vault solution, which has some proprietary pieces.
B: That's how things stand for alpha 2, but we're here to help. So you would have API types that you create, just like that KubeadmConfig we were looking at, and then you really just need a single controller, and you can copy and paste a good portion of the logic that's in this reconcile function. A lot of it is boilerplate where we retrieve the object and, you know, short-circuit if it's already ready; then we go ahead and find the Machine that's associated with this KubeadmConfig.
B
If
the
machine
already
has
been
initialized,
then
we
short-circuit
we
go,
try
and
find
the
cluster,
and
then
I
would
say
at
this
point.
All
of
this
code
is
where
you
would
put
in
your
logic
to
generate
whatever
you
need
to
generate,
because
this
is
doing
stuff,
like
figuring
out.
If
the
control
planes
been
initialized
or
not,
and
if
it
hasn't,
and
we
make
sure
that
we
get
one
control,
plane
machine,
that's
allowed
to
initialize
and
the
other
ones
have
to
wait.
B
And
if
we
have
any
non
control
plane
machines,
they
have
to
wait
as
well.
This
is
all
QAM
related
stuff
and
and
then
eventually
we'll
deal
with
some
certificate
stuff
and
spit
out
join
data
for
QAM.
So
all
like
all
of
this
stuff,
that's
that's
beyond
really
just
getting
the
cluster
and
the
machine
and
the
cube
and
config
you
would
rip
that
out
and
replace
it
with
whatever
you
need.
E: I definitely need to take my team through this, and I'll try to understand it myself. Fantastic, okay. Well, I think I know how we can — again, we're eager to get started on a proof of concept. I'm sure I'll have a ton of questions over the next few weeks, but how about I then bring what we've built and show you guys, and we can figure it out.
B: Sounds good. I do want to just circle back around the fork-or-not question: if you are making changes specifically around bringing your own security groups, I think there's every reason that we should include that in CAPA, because we already allow you to bring your own VPC and subnets. So happy to have you start by filing an issue about it, if there isn't one already. Certainly you and your team can develop it independently, but, you know, we'll take it in for merging into master eventually.
C: Machine — is there a per-machine one? One question that I have for Andrea is: are the security groups sometimes going to be pre-created for you, and you just need to use them? Can they give some direction — the network team, I guess — around naming, or tags to add? Or is it just "you've got to use this ID, that's it"?
E: Yeah, that's what we have — "here's the ID, use that" — that's the process here. Security groups here are seen as a critical security control point, and as such, they built a mechanism to allow people to request changes around security groups. There is an enterprise GitHub workflow, where...
E: An information security officer and an architect have to manually review it and give approval. Then it can be merged into an upstream git repository that we don't have permissions to write to directly, and once that's there, we can run a CLI tool which can then apply the changes in the account.
C: So going back to the original question: if you want to do a bring-your-own-security-groups kind of thing, you need something for, I guess, these four categories: a bastion — like we talked about, having an option to disable the bastion — the control plane, the nodes, and the load balancer. You will need at least those three. So, given that these are hard-coded because they're always required for the 80% use case, I guess we could look at taking this back.
E: In our account there are already two. We have a security group which we apply to every Kubernetes node regardless of its role, and then we have specific ones for the etcd, worker, and master node roles. Then we have a security group for our control plane load balancer, and we have a security group for our workers — or a set of our workers — which allows a couple of node ports, which we use to route to our nginx ingress controller.
B: Yeah, that's kind of what I was saying: if you specify security groups, you kind of need a way to turn off reconciliation of security groups — except for maybe the ELB — but then at the Machine level, or the EC2 instance level, you just want to specify IDs and have them applied, and that's the end of the story there. Yeah.
B: Yeah, I think the roles matter to you in your situation, but from an API standpoint: if we give you the ability to specify one or more security group IDs for the load balancer, and then one or more security group IDs per machine — which translate to an EC2 instance — does that give you everything that you need, I think?
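The per-machine half of that API shape might look roughly like this — a hedged sketch in which the `additionalSecurityGroups` field name is a best-effort recollection of the CAPA API of this era, and the group IDs are placeholders:

```yaml
# Hedged sketch: attaching caller-supplied security group IDs per machine
# (field name from the v1alpha2-era CAPA API; IDs are placeholders).
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: worker-0
spec:
  instanceType: t3.large
  additionalSecurityGroups:
    - id: sg-0aaaaaaaaaaaaaaaa   # company-wide required group (placeholder)
    - id: sg-0bbbbbbbbbbbbbbbb   # workload-specific group (placeholder)
```

Under B's proposal, any role semantics (worker vs. master vs. ingress) live entirely in which IDs the caller chooses to list here — CAPA itself just applies them.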
C: And as Andy said before, this isn't about polluting things — it's adding good features to the product that other folks might need as well. So yeah, definitely: if you don't want to go and fork everything, we definitely welcome contributions here. Yeah.
E: Yeah, yeah, absolutely — I'm a hundred percent agreeing. I'm thinking about the implementation and how we would make it flexible enough that we don't lock anybody into a particular way. I guess what we want is, wherever that security-group-role slice is defined, for each of those roles to have a slice of security groups.
B: What I'm suggesting is that there are no roles: you manage the roles logically in your Capital One-isms, and anybody else who wants to manage roles can do it their own way. But from an API standpoint, you literally just have an array of security group IDs that are attached to the load balancer, and then an array of security group IDs that are associated with an EC2 instance, with the Machine.
B
So
if
you
want
to
have
an
ingress
load,
balance
or
security
group
and
a
Capital
One
required,
like
you
know,
company-wide
required
security
group,
you
can
have
those
two
IDs
associated
with
a
load
balancer
and
from
Kappas
perspective.
We
don't
need
to
know
what
their
names
are.
That
makes
sense.
It.
A: Like the credentials that we're encoding in there — there's no reason why we couldn't also support the region being embedded in there as well. Well, if somebody created their own AWS config with the region set in it, then it should work — I guess that should truly work, because it would just pull it from the default lookup path.
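The kind of AWS config file A is describing is just the standard shared-config layout with the region alongside the profile — nothing CAPA-specific. A minimal sketch, with the region value as a placeholder:

```ini
# ~/.aws/config — setting the region here lets the SDK's default
# lookup chain pick it up without any extra wiring.
[default]
region = us-east-1
```

Any tool using the AWS SDK's default credential/config resolution would then find the region without it being passed explicitly.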