From YouTube: Kubernetes - AWS Provider - Meeting 20200918
Description
Recording of the AWS Provider subproject meeting held on 20200918
A
Hello everybody, and welcome to the Provider AWS meeting. It's Friday, September 18th, 2020. Please follow our community guidelines for expected behavior. The meeting is being recorded. So let's take a look at the agenda.
A
We have three items on the agenda today; the first I will take.
A
Awesome, yeah, I will follow up. I think I need, like, the credentials for that to actually... yeah.
B
I think you, myself, and Justin... like, I don't... maybe a work email or a personal one, I don't remember, but it should be registered in the CNCF registry account. So it should work, but yeah, I'll walk you through it, because it is all manual. But we should really get some Prow integration so that we push automatically on a tag, instead of doing it manually every time.

A
Okay, that's a really good point: the automated Prow integration for push. Okay, so yeah, that's all I wanted to say for that one. Andrew, would you be willing to talk about the next one, 1.20 being the last release for in-tree features?
B
Yeah, so as many of you know, we've been trying to remove the in-tree cloud providers in Kubernetes, which is making decent progress, but it is still going slow, and it's a long-standing effort, because we want to make sure that we don't break any users doing the migration. So even for the AWS provider, I think we're pretty behind, in that the external provider is not really on par with in-tree, and it's not tested significantly enough, and we want to see significant adoption of it before we can just go ahead and remove the in-tree provider.

And we've been noticing that the in-tree provider has also been seeing some features go in. Not that anyone's purposefully trying to do that, but things happen, right? People need a functionality, they open a PR, someone approves it, and by adding more functionality we make it harder and harder for us to cut that tie.
B
So SIG Cloud Provider wants to put some guardrails in place, so that any PR labeled kind/feature is not allowed for the in-tree providers starting 1.21, and we're going to have some Prow automation to do that automatically.
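The guardrail described here boils down to a label-plus-path check. Below is a minimal, purely illustrative sketch of that decision rule in Python; the real enforcement is Prow configuration, and the legacy path prefix and exception label names are assumptions, not the actual sig-cloud-provider setup.

```python
# Illustrative sketch (NOT the actual Prow config) of the guardrail described
# above: hold PRs labeled kind/feature that touch the in-tree (legacy) cloud
# provider code, unless the SIG has explicitly granted an exception.

LEGACY_PATH_PREFIX = "staging/src/k8s.io/legacy-cloud-providers/"  # assumed path
EXCEPTION_LABEL = "sig-approved-exception"                          # hypothetical label

def should_block(labels, changed_files):
    """Return True if the PR should be held for SIG sign-off."""
    is_feature = "kind/feature" in labels
    touches_legacy = any(f.startswith(LEGACY_PATH_PREFIX) for f in changed_files)
    has_exception = EXCEPTION_LABEL in labels
    return is_feature and touches_legacy and not has_exception

# A feature PR against the legacy provider is held:
print(should_block({"kind/feature"},
                   ["staging/src/k8s.io/legacy-cloud-providers/aws/aws.go"]))  # True
# A bug fix against the same code is fine:
print(should_block({"kind/bug"},
                   ["staging/src/k8s.io/legacy-cloud-providers/aws/aws.go"]))  # False
```

With SIG sign-off (the exception label), the same feature PR would be allowed through.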
B
And basically, if you need to merge a PR that is marked as a feature, it requires sign-off from the SIG, and there needs to be a really compelling reason why you're doing that. So we're hoping that this is going to help us move the needle for adopting external providers, as we add more and more features in the external provider that may not be available in-tree. So that's it; does anybody have questions or concerns?
A
So one thing that I think we should do, at the point where we switch from 1.20 to 1.21 and add the guardrails to Prow: depending on what we decide to do with v2, we should either say, okay, v2 should be at least at a point where we can start accepting features there, or, alternatively, we could move the legacy code base over. Instead of importing legacy, we could copy the code over, like we have done in the past, but I think it was premature at that point, and then accept features there. But we need to have somewhere where we're allowing new development in the external cloud provider, obviously, so v2 would be, I think, a reasonable place to do that, if it ends up being what we adopt.
B
Yeah, I agree. Ideally we still import legacy v1, because we still want to get bug fixes, but then all features go into v2, and hopefully by 1.21 v2 is at a usable state, at least so that users can start playing around with it and trying to get features in and stuff. So that sounds good.
A
So the only big thing that I know of going on at AWS, at least, is we have some folks that are trying to, essentially... so we talked about this last meeting, but they're building what's called the AWS Load Balancer Controller, and they want NLB and ALB support to be in that controller instead of the cloud provider. I think Google is doing a very similar thing, so they're using the same feature that Google is adding, to basically just have it not implemented in the cloud provider and fall back on that. So I think that's going to require some changes. I'm not sure if they're merged yet or not; I really haven't been following that very closely, but it should require some changes in the cloud provider, and I think they're trying to get it merged for 1.20.
B
Okay, yeah, I know that's a common ask from cloud providers. I can't link it now because I'm on my phone, but I actually opened a KEP to make that generic in the service controller, by adding a class annotation field to Service. That way every provider isn't implementing their own annotation to skip a load balancer; it'd be great if we can all consolidate on that implementation rather than each doing our own.
A
Yeah, I will try to find that KEP after this meeting, because I... you're too quick, Peter.
C
Any idea if the classic load balancer will still be supported by whatever the plan is there in the controller?
A
Yeah, so as far as I know, classic load balancer support will remain in the cloud provider, the cloud provider we have now, and would also be supported by the external cloud provider. Only NLB and ALB support is going to be in this controller. Basically, what they're doing is taking the NLB code from the in-tree cloud provider and merging it with the ALB ingress controller, and then they're just leaving the classic load balancer stuff where it is.
A
I mean, if you have specific callouts there, that's who I would talk to... are you familiar with moonfish, who's driving that effort?
C
We've seen, in kOps, many people coming to us saying "we want an NLB," and we kind of start explaining what the downsides are, and they end up going away from an NLB or ALB. "Why can't we use an ALB for our cluster?" Well, because if we put the ALB in, you would have to have at least two availability zones. So until then, the classic load balancer is still the most flexible in these areas, which has its downsides also, but yeah.
C
Yes, I mean, most people want development clusters or test clusters at first, so once you start toying with it you get into: why doesn't it work with my single AZ? Why do I have to put two of them there? And you get other costs also: if you get two AZs, you have to have, in our case, two NAT gateways, and so on. So anyway, it's complicated.
A
Yeah, I can relay your message, or give you the contact information of the person who's driving that project, if you're interested.

C
That would be cool. Thank you.

A
Yeah, cool, all right. If there's nothing else on that topic: Andrew, did you want to add anything there?
B
Yeah, I was basically just going to say, around the class annotation: the proposed behavior is that if you don't set the class annotation, you just get the current behavior from the cloud provider, and then if you add the class annotation, it just gets ignored by the provider and an external thing can take over, which I think is basically what both Google and AWS are doing today. So yeah, we should.
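The proposed semantics reduce to a simple predicate on the Service: no class set means the provider reconciles as today; class set means the provider steps aside. A hedged Python sketch, where the annotation key is a placeholder rather than what the KEP actually defines:

```python
# Sketch of the proposed class-annotation behavior. The annotation key below is
# a placeholder for illustration; the real KEP defines the actual field/key.
CLASS_ANNOTATION = "service.kubernetes.io/load-balancer-class"  # hypothetical key

def provider_should_handle(service_annotations):
    """The cloud provider processes a Service only when no class is set."""
    return CLASS_ANNOTATION not in service_annotations

# No class: current behavior, the cloud provider reconciles the load balancer.
print(provider_should_handle({}))  # True
# Class set: the provider ignores it, so an external controller can take over.
print(provider_should_handle({CLASS_ANNOTATION: "external-lb-controller"}))  # False
```

The point of putting this in the shared service controller is that every provider gets the same opt-out mechanism instead of inventing its own skip annotation.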
A
Cool. Okay, so, and then... also, the v2 cloud provider: I think you'd be the best person to talk about that as well.
B
Yeah, so I have started a branch and started working on it, and I have an engineer on my team at VMware that is also going to help. She has started the load balancer implementation of it, and I'm going to start with the instances implementation of the v2 interface.
B
...that's not the private DNS name, and I'll kind of incrementally change and improve, using the current implementation as kind of a baseline. If there are other known things that people want to address: like, I know for the load balancer implementation we're definitely going to implement a better naming scheme for the ELB or the ALB, one that's not just some random UID. But if there are things like that in the provider that you've always wanted to see changed, but we couldn't, because, you know, for reasons, we'd love to hear them and see if we can.
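As a rough picture of what an "instances" implementation does, here is a toy sketch: it resolves a node (keyed here by private DNS name, purely for illustration) against a pretend instance table and reports metadata. The method names loosely mirror the shape of a cloud provider's v2 instances interface, but the class, the fake data, and the providerID format are all assumptions, not the real API.

```python
# Toy mirror of a v2 "instances" implementation: resolve a Kubernetes node to
# its cloud instance and report metadata. The lookup by private DNS name and
# the fake instance table are illustrative only, not real AWS behavior.

FAKE_INSTANCES = {  # pretend EC2 data, keyed by private DNS name
    "ip-10-0-0-12.ec2.internal": {"id": "i-0abc123", "type": "m5.large", "state": "running"},
}

class InstancesV2Sketch:
    def instance_exists(self, node_name):
        # Does a cloud instance back this node at all?
        return node_name in FAKE_INSTANCES

    def instance_shutdown(self, node_name):
        # Is the backing instance present but powered off?
        inst = FAKE_INSTANCES.get(node_name)
        return inst is not None and inst["state"] == "stopped"

    def instance_metadata(self, node_name):
        # Report what the node controller needs (IDs, instance type, ...).
        inst = FAKE_INSTANCES[node_name]
        return {"providerID": f"aws:///{inst['id']}",  # illustrative format
                "instanceType": inst["type"]}

sketch = InstancesV2Sketch()
print(sketch.instance_exists("ip-10-0-0-12.ec2.internal"))  # True
print(sketch.instance_metadata("ip-10-0-0-12.ec2.internal")["instanceType"])  # m5.large
```

A real implementation would back these lookups with cloud API calls instead of a dict, which is where the incremental improvements mentioned above (naming, node-to-instance mapping) would land.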
A
Yeah, I mean, annotations are not my favorite way of configuring things, but I'm not taking a stance there. I just think, knowing what we know now, and knowing all the annotations that we've used, if we started from zero and redesigned it, I doubt it would look the same. So maybe we do a small AWS KEP for the annotations there, or we could just do a little design doc or something like that; but somebody should just kind of noodle over it a little bit and see what they can come up with.
D
But a common problem with annotations is that the values are often organized such that they're aimed towards a single port or a single set of targets. So if you have a Service or an Ingress that's listening on multiple ports, being able to define settings that are different for each of those ports is often not possible; the annotations just aren't flexible enough to support that.
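The limitation can be shown concretely: a flat annotation carries one value for the whole Service, so every port receives the same setting, while a structured per-port config (the kind of thing annotations struggle to express) can vary per port. The annotation key and helper functions below are hypothetical, for illustration only:

```python
# Illustration of the per-port limitation described above.

def settings_from_annotation(ports, annotations):
    # A flat annotation (hypothetical key) applies one value to ALL ports.
    proto = annotations.get("example.io/backend-protocol", "tcp")
    return {p: proto for p in ports}

def settings_from_structured(ports, per_port_config, default="tcp"):
    # A structured config can give each port its own value.
    return {p: per_port_config.get(p, default) for p in ports}

ports = [80, 443]
print(settings_from_annotation(ports, {"example.io/backend-protocol": "https"}))
# {80: 'https', 443: 'https'}  -- both ports forced to the same value
print(settings_from_structured(ports, {443: "https"}))
# {80: 'tcp', 443: 'https'}    -- can differ per port
```

This is one argument for a redesign (KEP or design doc, as suggested above) rather than carrying the flat annotation scheme into v2 unchanged.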
A
Good point. So, I know you opened an issue regarding v2; are there places for people to jump in and help at this point? I know we have a lot of implementation work to do. Is the issue pretty...? I haven't actually looked at it yet.
B
Yeah, so I basically broke down the issue into four implementable parts: instances, the load balancer, zones, and routes. I'm working on instances; my coworker is working on the load balancer one.
A
All right, Peter.
D
Yeah, Nick, you opened a PR a while back about adding the external cloud-controller-manager to kOps, and while we're talking about improving testing and stuff before going GA, I think it might be helpful to revisit this. Once we land it, we can get some periodic end-to-end coverage on it, and that should help improve our confidence for going to GA.
A
Exactly, yeah, that was the goal with this. I got back to it this week a little bit, so I think I have a couple changes that I haven't pushed to the PR yet, but I ran e2e tests on the cluster that it created with this, and got some failures that I needed to investigate. I haven't dug into those failures yet, but I was hoping to get this done this week; definitely next week.
A
That
I
noticed
there
was
a
comment
on
behavior
for
pre
118,
where
we
don't
have
the
you
know
the
the
external
cloud
provider
doesn't
have
when
it's
in
releases.
So
you
know
what
what
what
is
the
correct
behavior
there?
What
what
do
we
want
to
do?
Do
we
want
to
you
know
if
they
choose
external,
do
we
say
it's
not
supported
pre-118.
A
There was some external cloud provider support, and I think it was just an image with all of the providers, yeah, so...
A
So that was specifically added because it's adding support for the Milan region.
C
Is that okay? Yeah, absolutely. It's hard to rebase this one; plus there are new versions of the SDK. So, okay, thank you. Yeah.