From YouTube: 20200210 - Cluster API Provider AWS Office Hours
A
Hello, today is Monday, February 10th, 2020. This is the Cluster API AWS Provider office hours. We are a sub-project of SIG Cluster Lifecycle. We have a code of conduct, which is basically: be nice to everybody. This meeting is being recorded, and please go ahead and add your name to the attendee list, and if you have agenda items, please feel free to add them below. There's a link in the Zoom chat for this document.
A
And this is data where people with read access to EC2, or appropriate read access to EC2, could get it. We move it into Secrets, and the Secret identities, or names, are randomly generated and they're short-lived. So it would be fairly impossible to guess the names of the Secrets, and even if you could, they only live for a couple of minutes, while the machine is booting up and retrieving them. Did I miss anything?
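The property just described — bootstrap Secrets whose names are randomly generated and therefore unguessable — can be sketched as below. This is a minimal illustration, not CAPA's actual implementation; the `bootstrap` prefix and the 128-bit name length are assumptions for the example.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomSecretName returns an unguessable Secret name by appending
// 128 bits of cryptographically random data to a prefix.
// (Illustrative only; the real naming scheme may differ.)
func randomSecretName(prefix string) (string, error) {
	b := make([]byte, 16) // 128 bits of entropy
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return fmt.Sprintf("%s-%s", prefix, hex.EncodeToString(b)), nil
}

func main() {
	name, err := randomSecretName("bootstrap")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // e.g. bootstrap-3f9c... (different every run)
}
```

A controller following this pattern would also delete the Secret once the machine has retrieved it, giving the short-lived window mentioned above.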
A
Well, I do want to give you a big shout-out and thank you. I know that the review process was extremely frustrating, both in the number of comments and the fact that we're spanning a lot of different time zones, so I appreciate you sticking with it, and I'm glad that it's in now. This is cloud-init specific, and it is.
A
It also requires bash, right, for the script that we're using. So if you are using a custom image and it doesn't use cloud-init, or it doesn't have bash in it, then this is going to break if you turn it on. I think we may, in the future, use a Go program instead of bash, or at least that's something to consider, although then we have to bake that into the images.
A
All right, so we have a group topic, which I realize we can't decide as a group today, but I know Jason has been wanting to attend these meetings and has a conflict at this time. It has to do with, I believe, the Contributor Summit for KubeCon, I think. So he wanted to know if anyone objected to, or would be willing to, moving this meeting to either an hour later than now or some other day and time, and we'll send out something on the mailing list.
C
Yeah, I can start. So we did file an issue upstream regarding multi-tenancy. It would be in alpha 3; this does not impact v1alpha2 deployments. Long story short: we're deploying webhooks upstream, and how the webhooks are registered with the CRDs is global, which means that you can only have one webhook service that responds to conversion requests for a given provider. So we came up with a few different solutions.
C
The one that we have in progress upstream is to move the webhook into a different namespace, which is prefixed — it's fixed, so it's always going to be the same namespace. That really complicates the deployment; the Makefiles and the tests need to change, and the e2e tests need to change as well. So we were debating, to understand if there is actually a strong requirement to have multiple CAPA deployments in a single management cluster.
C
And we can always look to add multi-tenancy to CAPA, so that, for example, for each cluster you can kind of say: hey, go use this secret for credentials. But that's TBD, and that's not how it works today. So it's kind of a big change, and I just wanted to bring it to the group to understand if there is interest or not — in a very small group today, but yeah.
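The per-cluster credentials idea floated here — each cluster optionally pointing at its own Secret, falling back to the controller's shared credentials — could look roughly like this sketch. Everything named here (`awsClusterSpec`, `CredentialsSecretRef`, the default Secret name) is hypothetical, since the speaker says this is explicitly TBD.

```go
package main

import "fmt"

// awsClusterSpec is a hypothetical stand-in for an AWSCluster spec with
// an optional reference to a per-cluster credentials Secret.
type awsClusterSpec struct {
	Name                 string
	CredentialsSecretRef string // empty means "use the controller's credentials"
}

// defaultCredentialsSecret is an assumed name for the shared, controller-wide
// credentials Secret.
const defaultCredentialsSecret = "capa-manager-bootstrap-credentials"

// credentialsSecretFor returns the Secret a reconciler would read AWS
// credentials from: the per-cluster override if set, else the shared default.
func credentialsSecretFor(c awsClusterSpec) string {
	if c.CredentialsSecretRef != "" {
		return c.CredentialsSecretRef
	}
	return defaultCredentialsSecret
}

func main() {
	tenantA := awsClusterSpec{Name: "team-a", CredentialsSecretRef: "team-a-aws-creds"}
	shared := awsClusterSpec{Name: "shared"}
	fmt.Println(credentialsSecretFor(tenantA)) // team-a-aws-creds
	fmt.Println(credentialsSecretFor(shared))  // capa-manager-bootstrap-credentials
}
```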
A
…are not currently doing multi-tenancy, but are interested in exploring it, and they're aware of this issue and what you just said, Vince. So I think this is something we definitely need to talk about together with the larger audience, and we also need to try and resolve it fairly quickly. But we do have some time between now and when we're planning on releasing the next minor version, in early March. Yeah.
C
There's also the fact that the SDK isn't really written in a way that's native to switching credentials on the fly, but we can make it work. There were some improvements, but I need to double-check. I believe there's a new version of the SDK somewhere as well, so I'm going to have to check that too. Maybe.
A
Well, let's reach out to whatever contacts we have within AWS, especially ones who know the Go SDK, and see what they recommend. If they just flat-out say "no, don't even bother trying to use multiple credentials in a single process," then that may be the end of it. But if they have advice for how to do it efficiently, I think it'd be useful to get.
A
No, you added a bunch of stuff there. All right, so we still have "document what you get in a cluster" — I still think it's important; I'm not going to move it from the milestone. "Audit the code to see that we're generating events every time we talk to AWS" — probably still worth doing. Node disks — this one, John's PR, did it get in? No.
B
A
Add a roadmap document — I still need to actually start this, but then I wasn't really sure what to put in it, because some of the stuff that I had in my Google doc was out of date. So maybe we could talk about that real quick. I realize we don't have too many people here, but if we jump over here — one of the things that we had was the ability to disable the bastion host, which is in. Did that make it in — did it get backported to alpha 2, or is that just an alpha 3 thing?
A
So I'll just go down one by one. For alpha 3, we already have disabling the bastion host. We had an idea to do documentation, or something, around what sort of topologies we may support. I don't know that anybody's done anything here, and I can't remember if we have an issue for this. Let's see.
A
All right, so did that go in recently?
C
A
I mean, it's for alpha 3, yeah. Okay — I definitely want to do the failure detection. We want to do the status conditions; I'd love to see a machine load balancer implementation; and if we can get around to doing queuing updates, so that we don't have to poll, that would be great. Oh hey, New Relic folks — we were just going over what the roadmap could potentially look like, because I still have it on my to-do list to convert this into a document to stick into the repo.
A
Anybody have any — well, I guess, if you've got any other ideas that you want to talk about now, feel free. Otherwise, I will turn this into a PR and we'll iterate on it, like we usually do.
A
And obviously, you know, not all this stuff will necessarily make it in for alpha 4, but I think we probably need proposals now for things to potentially do in alpha 4, so that as soon as we get the release done, or when we're really close to doing it, we can all come together and agree on what our priorities are for alpha 4, so that we're not waiting two months before we actually start developing things.
E
I have to test — it's funny, the reason why we were a couple of minutes late was because we were basically discussing all the testing that we want to do in the next couple of weeks, basically on v-103. So we're basically going to start kind of swarming on that — making clusters and upgrading existing ones — and I'll probably be opening issues left and right, but I will be testing, at some point, that exact issue that you're talking about with the labeling. Okay.
D
For example, let's say CAPA is dealing with the AWS provider. Then you have other providers — the Google provider, and whatever Kubernetes GKE has. So the question is: are these all going to have something in common for the metal? Like, Cluster API talks to MetalKube, and MetalKube deals with the metal? There is bare metal, and the bare metal characteristics that are defined — machine, machine pool, machine set, instances, etcetera — are managed by MetalKube. So are we going to have these differences, in different providers, separated, so that those…
E
…differences would be captured in the — I don't know what it would be called — like a bare metal machine CRD, as opposed to… So, like, in AWS we have the AWSMachine CRD, and that covers all of the AWS-specific configuration items, and then the things that are in the Machine object are constant. Like, there are some fields you can put into a Machine, but a lot of the provider…
A
Yes, so that's not a goal for Cluster API. There is a purposeful split between the Cluster API Machine and what you see in a provider's machine type. So if you look in Cluster API, we have machine types, and inside the Machine spec we've got clusterName, bootstrap, infrastructureRef, version, providerID and failureDomain. These are the only things to date that we have determined to be common across all infrastructure providers. Some things, like version, some providers may not use; some providers may not use bootstrap.
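The split being described might be sketched with simplified stand-in types. These are not the real Cluster API structs — just an illustration of which fields live in the common Machine spec versus the provider-specific specs named in the discussion:

```go
package main

import "fmt"

// machineSpec mirrors the provider-agnostic fields named above: cluster name,
// bootstrap, infrastructureRef, version, providerID and failure domain.
// Simplified stand-in types, not the real Cluster API definitions.
type machineSpec struct {
	ClusterName       string
	Bootstrap         string // reference to bootstrap config/data
	InfrastructureRef string // points at e.g. an AWSMachine or BareMetalMachine
	Version           *string
	ProviderID        *string
	FailureDomain     *string
}

// awsMachineSpec holds AWS-specific configuration, kept out of the common
// Machine spec on purpose. (Fields here are illustrative examples.)
type awsMachineSpec struct {
	InstanceType string
	AMI          string
}

// bareMetalMachineSpec holds the bare-metal-specific fields mentioned:
// providerID, image, userData and hostSelector.
type bareMetalMachineSpec struct {
	ProviderID   *string
	Image        string
	UserData     string
	HostSelector map[string]string
}

func main() {
	v := "v1.17.2"
	m := machineSpec{ClusterName: "demo", InfrastructureRef: "AWSMachine/demo-0", Version: &v}
	fmt.Println(m.ClusterName, *m.Version)
}
```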
A
It just kind of depends on what they're doing. And if you go and look at your BareMetalMachine spec, it has providerID, image, userData and hostSelector, and these are the things that the folks at the bare metal operator, from MetalKube, have determined are specific to bare metal machines. And it's a very purposeful distinction between common things that Cluster API can talk about and describe, and then infrastructure-provider-specific things. Does that answer your question?
D
There is a Go API associated with this, for the interaction with MetalKube and exposing the label. The only issue I had was with the label, because when you bring the label — if I want to say that this bare metal operator is for an AWS workload, a compute workload — then I need to label it as something, and that label has to be selected using the selector at the higher layer. I don't…
B
Yes — I think a hardware classification controller makes more sense in a bare metal environment, because you need to be able to know what that physical hardware is. Okay, then put your workload cluster in AWS — because AWS already has that information, and we are just requesting a machine straight from AWS, which already has all that information. So, like, EC2 already has that information; it's a different world when you're in bare metal.
E
How do you — how can we make it easy to specify an AMI if, for example, the Kubernetes version is being passed down from, like, CAPI components? So right now it works such that you can do, like, a regex query — does that work, will that work, with different operating systems, or something like that? Like, we've been talking about CoreOS for a while — or Flatcar now — but yeah.
A
It's designed to do a wildcard or regex sort of match for the Kubernetes version and the base operating system, and those are fields that are available in the spec, and you can obviously switch the lookup org ID or account, so that it's not the VMware one that we're publishing to. And then, beyond that, if you need any more specificity, you need to just go find the AMI and plug it in yourself, or adjust the code to do different searches, or just search differently. Yeah.
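The regex-style lookup just described can be sketched as below. The `capa-ami-<os>-<version>-` naming convention used here is an assumption for illustration; the real published AMI names may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// lookupAMI filters candidate image names by base OS and Kubernetes version,
// in the wildcard/regex style described. The name pattern is illustrative.
func lookupAMI(names []string, baseOS, k8sVersion string) []string {
	re := regexp.MustCompile(fmt.Sprintf(
		`^capa-ami-%s-%s-`, regexp.QuoteMeta(baseOS), regexp.QuoteMeta(k8sVersion)))
	var out []string
	for _, n := range names {
		if re.MatchString(n) {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	// Hypothetical candidate image names.
	images := []string{
		"capa-ami-ubuntu-18.04-1.17.2-00-1581349883",
		"capa-ami-centos-7-1.17.2-00-1581349883",
		"capa-ami-ubuntu-18.04-1.16.6-00-1580000000",
	}
	fmt.Println(lookupAMI(images, "ubuntu-18.04", "1.17.2"))
}
```

Switching the lookup to a different org or account would then just change which candidate list is fed in, while the same match logic applies.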
E
I think something that — and this came up before, and might just need some detailed, proposal-style stuff — is that CoreOS and Flatcar don't really want to work that way. You know, we don't build a bunch of different CoreOS AMIs; we have one, and as part of basically booting, it ensures that it has all the packages that it needs.
E
So I don't know if it's something to consider for later — like an install-dependencies-on-boot kind of deal — but that is the way CoreOS and Flatcar want to do things. In the meantime, we could surely meet the minimum criteria, that is, have a bunch of images that match the regex, etc. Yeah.
A
Awesome, all right. So I think everything in here is still relevant. I don't think any of it is release blocking, other than I know we have some internal consumers that are looking to get the node disk configuration in for alpha 3, and we have a PR for it that hopefully can get in this week. So in terms of issues, I think we're in a good state for PRs.