From YouTube: Kubernetes - AWS Provider - Meeting 20200515
Description
Recording of the AWS Provider subproject meeting held on 20200515
A
Hello everybody, this is the AWS provider bi-weekly meeting. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the internet on YouTube, so please be mindful of our code of conduct, which essentially is to be a good person. We don't have too much on our agenda and we only have a couple of people here today, so this probably won't take the full hour. But why don't we kick it off with Andrew's items? He sends his apologies. Nick?
B
Yeah, so he couldn't make it. He asked me to just quickly run through his topics. The first one is removing inactive users from the OWNERS file. So last week we talked about this one, and we wanted to have at least a two-week period between meetings for people to respond to the PR. I haven't looked at it in a couple of days; I know there were some responses, and some people didn't respond. So I think at this point we can assume that it is...
B
Cool, yeah, so that's it for that item. The next one is the docs site, which is up. Andrew had a PR for the docs site; I'm going to click on that, actually. Cool, yeah. So we have no content yet; we need people to start contributing docs. I mean to try to start working on that next week, but the more people we have contributing, the better the docs will be.
A
Yeah, there was a question in the agenda around how to ask for help and how to divvy up the work. I suggest that the best strategy is to get some things up, copying content where we can, and then point people to it and explain how they can contribute. You know, it's like the old adage that the best way to get an answer is to give the wrong answer.
B
Sounds good to me. Yeah, let's just get some quick stuff up next week and then go from there. And then, in terms of divvying up the work, I think maybe we should create a few issues on the different sections we definitely want to cover. One that, for me, is a huge pain every time I have to look at the code is, for example, load balancer annotations. I think we should definitely try to cover that well in our docs. So, just...
B
I mean, I guess we could probably take a closer look at what exists already, and if there's something they already have that is good, then maybe we don't need to touch it. But if there's a lack of docs in that area, we could suggest having a section on it. Storage makes sense, right? It's a very similar...
B
This one, actually, I'm not... let's open up this PR here. I'm not sure; I'm guessing it's an image for it. Yes, yes, it is. Awesome. I also created an image: we have some internal accounts, and I created an ECR repo for the cloud provider and built the existing 1.17 image for that. So it looks like we'll have a GCR image and an ECR image. That's good.
B
Yeah, just to reiterate: we now have a GCR image and an ECR image, so we can skip that item. I think I have a PR open just to merge... Andrew had added the manifests for kind of an example external provider deployment in a cluster, and he had a placeholder, so I just replaced it with the ECR image. We can debate later whether it's the ECR or the GCR image that goes there, but that's it.
B
Exactly. I didn't have time to do super thorough testing, or really much testing at all, so I decided to just make an alpha release and let people play around with it; just get it out there rather than waiting around. Then I'll probably try to do a real release next week, if I have time to test it, or soon.
A
So yeah, the core request, I think, is good, which is to be able to add additional, more granular or less granular, labels so that you can target an instance family, like we were discussing before we started recording; a category like c4, or C instances generally, I guess. We don't currently have a way to express that in the scheduler other than to create labels at the different levels of granularity. That's my understanding.
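
For illustration, a minimal Go sketch of the label idea. The node.kubernetes.io/instance-type label is the existing well-known label; the instance-family and instance-generation keys below are hypothetical, not an agreed-upon API:

```go
package main

import "fmt"

// A minimal sketch of the labeling idea under discussion: alongside the
// existing instance-type label, the cloud provider could publish
// coarser-grained labels so pods can target a family or generation with
// a plain nodeSelector.
func nodeLabelsFor(instanceType, family, generation string) map[string]string {
	return map[string]string{
		"node.kubernetes.io/instance-type":       instanceType, // exists today, e.g. "c5.large"
		"node.kubernetes.io/instance-family":     family,       // hypothetical: "c"
		"node.kubernetes.io/instance-generation": generation,   // hypothetical: "5"
	}
}

func main() {
	// A pod wanting "any C-family node" could then select on the family
	// label instead of pinning an exact instance type.
	fmt.Println(nodeLabelsFor("c5.large", "c", "5"))
}
```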
C
So it's in the legacy cloud providers, under AWS; each cloud provider specifies its own instance type, right? So I was wondering if we could just add instance generation and instance family there, or does it have to be something that's consistent across all cloud providers? Because there are differences. I don't know if all the other cloud providers have the concept of an instance family and instance generation and all that stuff.
A
The way you did that in the autoscaler today, you labeled the auto scaling group. Was it extensible? Do we just have to make a change to the autoscaler, or can we just change kops, change EKS, or whatever it is? I mean, it's the node side, so I don't know, but something would have to add those labels, or...
C
It's been a while since I looked at it, so I don't really remember how it works. I know there's a map where we look up information, like how much CPU or memory is on a specific instance type, so that we can kind of pre-calculate what the scheduling would look like. But because this is not an instance type, it's an instance generation, we'd actually have to look through all of those, I guess.
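
Roughly, the lookup described works like the sketch below: a static map keyed by exact instance type, which a generation-level query would have to scan in full. The struct and table contents are illustrative, not the actual cluster-autoscaler data:

```go
package main

import (
	"fmt"
	"strings"
)

// instanceInfo mirrors the kind of static lookup table described above:
// each exact instance type maps to its resources so the autoscaler can
// pre-calculate scheduling. Entries here are illustrative only.
type instanceInfo struct {
	VCPU     int
	MemoryMB int
}

var instanceTypes = map[string]instanceInfo{
	"c4.large": {VCPU: 2, MemoryMB: 3840},
	"c5.large": {VCPU: 2, MemoryMB: 4096},
	"m5.large": {VCPU: 2, MemoryMB: 8192},
}

// typesInGeneration shows the cost being discussed: because the map is
// keyed by exact type, a generation-level query ("c5") has to scan every
// entry rather than do a single lookup.
func typesInGeneration(prefix string) []string {
	var out []string
	for name := range instanceTypes {
		if strings.HasPrefix(name, prefix) {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(typesInGeneration("c5")) // [c5.large]
}
```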
A
Yeah, I mean, to my mind, the right step is probably to start the PR and then, as we often have to do, check what happens with the autoscaler, because it might be that scheduling says, "oh, you should do this instead." Yeah, I feel like getting the ball rolling there.
A
I mean, I know GCP has a similar concept, with the different architectural generations, but the names are going to be different. It's not like you can target a C and always get, say, a Xeon of whatever family it is, Ivy Bridge or something; you're going to get a different thing based on the different cloud provider. So it wouldn't be that you could target in that way. I don't know if we want to start exposing the architecture; exposing the architecture feels wrong.
C
Okay, so right now the way we get the instance type is through the instance metadata, and I don't think it exposes the generation and family information. So I guess there would have to be some sort of logic just to parse that from the instance type. Does that have the potential to break things in the future, like if there's a different naming convention?
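
For concreteness, a minimal Go sketch of that parsing logic, splitting an instance type such as "c5d.xlarge" into family, generation, and attribute suffix. The function is hypothetical and leans entirely on the current naming convention, which is exactly the fragility being raised:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// parseInstanceType sketches the parsing step: it assumes the current
// EC2 convention of "letters, digits, optional letters, dot, size",
// e.g. "c5d.xlarge" -> family "c", generation "5", attributes "d".
func parseInstanceType(instanceType string) (family, generation, attrs string, err error) {
	name, _, ok := strings.Cut(instanceType, ".")
	if !ok {
		return "", "", "", fmt.Errorf("unexpected instance type %q", instanceType)
	}
	i := 0
	for i < len(name) && unicode.IsLetter(rune(name[i])) {
		i++ // leading letters = family
	}
	j := i
	for j < len(name) && unicode.IsDigit(rune(name[j])) {
		j++ // digits = generation
	}
	if i == 0 || j == i {
		return "", "", "", fmt.Errorf("unexpected instance type %q", instanceType)
	}
	return name[:i], name[i:j], name[j:], nil
}

func main() {
	family, gen, attrs, _ := parseInstanceType("c5d.xlarge")
	fmt.Println(family, gen, attrs) // c 5 d
}
```

Any change to the naming scheme would make this return an error, or worse, mislabel nodes, so such labels would effectively bake in an assumption about EC2 naming.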
A
He dropped in and out, so if we need to, we can postpone it for two weeks, I think. OK, but yeah, anyway. Yes, thank you for the tip; I'm not necessarily upgrading right away to Zoom 5 on Linux, and that will probably save some people some trouble. But yes, we will postpone that until the 29th, which is in two weeks. All right, but yeah, that will be a demo of Cluster API v1alpha3, correct, on AWS.
B
I realized I just have one really quick thing, on the cloud provider image and doing releases for that. Andrew has a KEP that is sort of a strong suggestion for how cloud providers should do versioning, which is basically to take the major and minor version of the Kubernetes release they're matching, use that same major and minor version, and then your patch version is independent. So I did that. The code right now in the external repo is 1.17, so I just did 1.17 for the first release. And then, do we want to follow the same convention of release branches, where we have a release branch for each minor version of Kubernetes? It seems reasonable; I just wanted to throw it out there in case there are other ideas.
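
A small Go sketch of that convention, assuming the usual Kubernetes "release-X.Y" branch and "vX.Y.Z" tag naming (the names are illustrative, not mandated by the KEP):

```go
package main

import (
	"fmt"
	"strings"
)

// providerRelease sketches the proposed scheme: the provider reuses the
// Kubernetes major.minor it targets, while the patch number counts the
// provider's own releases independently of the Kubernetes patch.
func providerRelease(kubernetesVersion string, providerPatch int) (branch, tag string) {
	parts := strings.SplitN(kubernetesVersion, ".", 3) // e.g. ["1", "17", "8"]
	majorMinor := parts[0] + "." + parts[1]            // no validation; sketch only
	return "release-" + majorMinor, fmt.Sprintf("v%s.%d", majorMinor, providerPatch)
}

func main() {
	// Kubernetes may be at 1.17.8 while the provider's first 1.17-line
	// release is v1.17.0; the patch numbers deliberately diverge.
	branch, tag := providerRelease("1.17.8", 0)
	fmt.Println(branch, tag) // release-1.17 v1.17.0
}
```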
A
I see a thumbs-up. I do like that; it's what we've done in kops for a long time, and it seems to work. The only confusion is that then you're at, you know, 1.17.1 while Kubernetes is at 1.17.8, but that seems to pale in comparison with the other options.
A
It's Kubernetes versioning anyway, so it's less semver, less strict. And we probably want to make our own decisions about when to release features. Like, kops tries to release features only on the minors, but the minors are delayed compared to Kubernetes, so it's skewed; it's still the same basic approach, though. Each project can adopt its own rules in that regard.