From YouTube: Kubernetes - AWS Provider - Meeting 20210430
Description
Recording of the AWS Provider subproject meeting held on 20210430
Discussion: cloud provider extraction effort update, adding a feature gate to disable the cloud provider to all components, SigV4 pre-KEP, discussion of https://github.com/kubernetes/kubernetes/pull/101592, discussion of https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/1948 (PrivateLink support for aws-load-balancer-controller).
B
Welcome to the AWS provider subproject meeting; it is April 30th, 2021. Please remember to respect the code of conduct, and let's go ahead and get started. We have a pretty light agenda today. We have a suggestion that has been getting kicked along for backlog grooming; I'm going to move that to the end, assuming we have time. I did want to give a quick update on cloud provider extraction from the AWS perspective.
B
As a quick summary: we have a number of moving pieces with the cloud provider extraction. We have the cloud controller manager, which you can find at kubernetes/cloud-provider-aws; we're working on that and we consider it in an alpha state.
B
We have alpha releases starting at, I think, 1.18 up through 1.20, and we're working on 1.21. We're also trying to get what we'll call beta releases out as soon as possible for all of those versions, in the next month, I'm hoping. As part of that effort, I am looking into a feature that was just recently added in-tree, which is the leader migration feature.
B
That's something that EKS needs for allowing clusters to upgrade from a single KCM to a KCM plus CCM with different leaders, and it's basically just some configuration. Another piece in that same repository is the credential provider, or rather the kubelet credential provider, I think, is what we call it. That is something Ayberk has been working on a lot; we have an implementation of it and he is currently adding some type of binary.
B
I think we're starting with just a GitHub release that can be downloaded, and that would go on the worker nodes. So, for example, the EKS worker node AMI is going to have to include that binary, along with some configuration that kubelet respects, which will take the place of the code that's currently inside kubelet providing the ECR image credentials. Same for kops.
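As a rough illustration of how an external kubelet credential provider plugin works: kubelet exec's the binary, writes a CredentialProviderRequest as JSON to stdin, and expects a CredentialProviderResponse on stdout. The struct shapes below are simplified assumptions for illustration, not the exact k8s.io/kubelet API, and a real ecr-credential-provider would exchange IAM credentials for an ECR authorization token rather than returning a placeholder.

```go
// Minimal sketch of a kubelet credential provider plugin (assumed/simplified
// request and response shapes; not the exact k8s.io/kubelet API).
package main

import (
	"encoding/json"
	"os"
)

type request struct {
	Image string `json:"image"` // image the kubelet needs credentials for
}

type auth struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

type response struct {
	Kind       string          `json:"kind"`
	APIVersion string          `json:"apiVersion"`
	Auth       map[string]auth `json:"auth"` // keyed by registry match pattern
}

func main() {
	var req request
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1)
	}
	// A real provider would call ECR's GetAuthorizationToken here.
	resp := response{
		Kind:       "CredentialProviderResponse",
		APIVersion: "credentialprovider.kubelet.k8s.io/v1alpha1",
		Auth: map[string]auth{
			"*.dkr.ecr.*.amazonaws.com": {Username: "AWS", Password: "<token>"},
		},
	}
	json.NewEncoder(os.Stdout).Encode(resp)
}
```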
B
I did open an issue in the kops repo just mentioning that they'll need to do that. I think Ayberk is thinking about that as well, so I believe he will be opening up a PR for that.
B
Separately, we did have an extraction meeting that I missed, I want to say last week, where there was some talk (maybe Andrew can speak to this a little) about the persistent volume labels.
B
What was previously the admission controller, and is now the persistent volume labeler controller, has some AWS- and Google-specific code in it, and I believe we need to decide how to replace that. I did miss that meeting, so I don't know the specifics; I need to re-watch that one. Andrew, do you happen to want to give an update on that?
C
Yeah, I wasn't there either; I just got back from vacation. But I do know that Walter was planning to rehash that at the next meeting. I don't think we made any formal decision on what to do there.
B
Got it. So that's all I can think of off the top of my head in terms of where we are. I think the timeline is getting tighter. The most reasonable first possible date for actually taking cloud provider code out of tree (and I don't think it'll happen this early) is probably 1.24, and the schedule is now coming out for 1.22.
B
If,
if
cloud
provider,
if
like
the
the
components
that
we
have
go
beta
as
soon
as
possible,
and
then
you
know
some
type
of
ga
around
123.,
you
know
we're
cutting
it.
We're
cutting
it
pretty
close
at
this
point,
so
we
need
to
so.
From
our
perspective,
we're
you
know
we're
we're.
C
Yeah, I'll just add to that: there is a PR currently open in kubernetes where we're adding a feature gate to every core component.
C
It's going to be called something like DisableCloudProviders. It'll be introduced as alpha and off by default, and at some point we're going to flip that feature gate to beta, which will by default turn off the cloud provider. By "turn off" I mean: if you run a core component with the cloud-provider flag set to anything that's not external, then the process just exits early during initialization. Hopefully that is the strongest signal we can give to users.
C
The message is: please migrate to the out-of-tree provider. But they still have the option, and I'm sure some will go with it, to flip the feature gate off explicitly and continue to run the in-tree provider. Then at some point we flip the feature gate to GA, lock it, and that's when they have to migrate.
C
But the idea is that the switch to beta is the best warning we can give users; at that point it's kind of on them to migrate off. And I think the intention of the change was that it puts more weight behind this.
C
It makes the whole effort of removing and disabling the cloud provider more concrete: we can just look at when the feature gate goes beta and GA and treat that as the release timeline for when we're really going to cut ties. It also decouples removal of the functionality from removal of the code itself; we don't necessarily have to remove the code in the same release we disable it. We can turn it off for a few releases and then remove the code.
C
So I think 1.24 for the switch to beta, where we turn it off by default, seems reasonable to me, but that's up for discussion. I'm hoping that at least for 1.22 we get the initial PR with the gate merged, so it works and it can turn things off.
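As a hedged sketch of the behavior described above (not the actual implementation in the open kubernetes/kubernetes PR), a component's startup check might look roughly like this; the exact flag handling is an assumption based on the discussion:

```go
// Sketch: exit early when the DisableCloudProviders feature gate is on and
// a built-in (non-external) cloud provider is requested. Details are assumed.
package main

import (
	"fmt"
	"os"
)

func validateCloudProvider(cloudProvider string, disableCloudProviders bool) error {
	// "external" (or empty) means the out-of-tree cloud-controller-manager is used.
	if disableCloudProviders && cloudProvider != "" && cloudProvider != "external" {
		return fmt.Errorf(
			"--cloud-provider=%s is disabled by the DisableCloudProviders feature gate; "+
				"set --cloud-provider=external and run the out-of-tree provider", cloudProvider)
	}
	return nil
}

func main() {
	if err := validateCloudProvider("aws", true); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // the process exits early, as described in the meeting
	}
}
```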
B
So, for the KEP that we were talking about before we started recording, I don't think we necessarily need to rehash it here. I will just add it to the agenda, mention it, and if people are interested, they can go watch the conversation in SIG Auth from earlier this week. Just as a quick summary: it's a feature that's interesting to EKS because it allows SigV4 on the client side for Kubernetes. SigV4 is the AWS method of authenticating requests.
B
So it's a request signing mechanism. The implementation is still being debated, but what has been mostly agreed upon among SIG Auth is an extension to the existing exec feature in client-go, where in this case a binary would be exec'd and it would return a unix domain socket path, or a port for use on localhost, which client-go would then communicate with. All of the requests would be sent to that.
B
That
would
be
a
proxy
and
there
there
would
have
to
be
that
proxy
that
localhost
proxy
would
have
to
obey
existing
proxy
settings.
So
you
have
you,
have
this
little
bit
of
ch?
You
know
if
you
already
have
a
proxy
set
up.
You
have
some
kind
of
chaining,
but
the
idea
is
that
that
that
whatever
is
listening
on
that
unix
domain,
socket
or
port
is
going
to
do
this.
The
request
signing
and
then
it's
going
to
send
the
request
on
to
the
api
server.
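As a rough sketch of that proxy idea (the listener path, endpoint, and signRequest helper are all hypothetical; this is not the proposed KEP implementation), the exec'd plugin could start something like the following and hand the socket path back to client-go:

```go
// Sketch: a localhost reverse proxy that signs each request (e.g. with SigV4)
// before forwarding it to the real API server. All names here are illustrative.
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// signRequest is a placeholder for SigV4 signing of the outgoing request.
func signRequest(r *http.Request) {
	r.Header.Set("Authorization", "<sigv4-signature>") // hypothetical
}

func main() {
	apiServer, _ := url.Parse("https://my-cluster.example.com") // assumed endpoint
	proxy := httputil.NewSingleHostReverseProxy(apiServer)
	orig := proxy.Director
	proxy.Director = func(r *http.Request) {
		orig(r)
		signRequest(r) // sign, then forward to the API server
	}

	// Listen on a unix domain socket whose path is returned to client-go.
	l, err := net.Listen("unix", "/tmp/k8s-sigv4-proxy.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(l, proxy))
}
```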
B
There's already a feature for a front proxy in front of the API server to do any kind of custom validation people need, so we don't really need to solve that side of it; it's really just the client side that needs something. So I'll link that in the agenda, and if people are interested, they can just go watch the SIG Auth update.
D
Currently our exec plugins for authentication are called and return, as I recall, the token that gets used. It sounds like this is actually a full proxy; is that right? Awesome, so it's much more powerful.
D
Yeah, I'll definitely watch that, because we've had a long-standing issue, for example, to have multiple IP addresses supported in client-go; now you could do it with this plugin. Or if I wanted to say, I want to dial my API server only over a WireGuard VPN, then I could do that sort of thing now, right? Yeah.
B
I'd love to talk to you about that. The more examples we have, the easier it is to push it along and convince people that it's important, so I'll make a mention of it and I'll reach out to you.
B
I'll
get
the
link
later.
Okay,
I
have
had
backlog
grooming
on
the
agenda
for
a
while.
I
did
quickly
want
to
ask
kishore:
are
you
you
did
have
the
race
condition
for
the
service
controller
issue?
Did
you
want
to
update
on
that
or
or
do
you
need
anything?
Are
you?
Are
you
good
with
that?
One.
H
Yeah, I don't have any update since Wednesday; I haven't had a chance to look at it since then. I'll take a look next week when I get some time to work on the potential fix, and I will reach out to you on Slack if I need help.
B
Can you see my screen? Is it a reasonable aspect ratio? I can also open another window, which shrinks it, so let me know if you can't see it.
B
There are some questions there; we can see if there's something misconfigured with their setup.
B
Is
k3s
like
a
mini
cluster
on
one
contestants.
C
Yeah, it's a fork specifically optimized for edge, and for that reason it actually doesn't have any of the cloud providers built in, which is why someone running k3s on AWS would have to run something like the CCM.
H
I will start monitoring this queue as well and start posting updates.
B
Yeah, and if you have classic load balancer stuff from k/k, you don't have to, but if you feel like it, you can throw them my way.
H
So the thing is, it may not take effect after they update the service; that is one suspicion I have, depending on the version. Did they mention the version here?
D
Yeah, they did add the subnet after, because they created the node in the second zone after having created the service. So it sounds like, although that node did get added according to step six, we didn't update the NLB with the additional subnet or zone.
H
Yeah, subnets also don't get updated by the in-tree controller; we only specify them during creation and we stick with that. And NLBs have what I have always seen described as a temporary limitation that prevents updating the subnets after creation. We can enable additional AZs, but we cannot change the subnets; my console always says "temporary limitation", but it's something we might have to be aware of for these types of things.
B
Got it, okay. So it's an ordering problem, and you're saying the in-tree controller doesn't support updating subnets after the fact. All right.
B
Okay, this one I created. I just wanted us to have something public, in either this repo or the load balancer controller repo or both, telling users what we're doing. Based on what we've talked about, I'm sort of making the assumption we're moving NLB functionality to the load balancer controller, and maybe we do some kind of unified installation method, either through documentation or through a Helm chart, so that users can go to one place and just install both of these.
B
It's
maybe
that,
and
maybe
that's
a
bad
idea.
I
mean
you
know
you
might
get
the
the
ccm
through
your
installer
and
you
maybe
you're.
Looking
for
the
you
know,
I
don't
know
how
you
know:
we're
probably
not
going
to
be
able
to
meet
all
use
cases
here,
but
if
you're
using
at
least
eks
it's
done
for
you
if
you're
using
cops,
it
should
be
very
easy
to
configure
these.
H
Yeah, so for the load balancer controller, we are very close to releasing 2.2.0, where we are going to support instance targets as well.
H
Yeah, correct, and if they don't want the complexity of the load balancer controller, like if they have simpler NLBs, they could still stick with the in-tree provider.
B
Yeah
definitely
I'm
thinking.
One
thing
we
could
do
similar
to
what
we're
doing
upstream,
with
cloud
provider
is,
is
introduce
a
feature
gate
in
ccm
that
disable
at
some
point
disables
nlb
functionality
here
so
they'd
have
to
go
and
re-enable
it,
and
we
can
provide
some
some
message
there
saying
you
know
this
functionality
and
this
controller
is
no
longer
supported.
You
should
move
to
the
load,
balancer
controller.
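As a hedged sketch of that idea (the flag name, warning text, and controller wiring are all assumptions; nothing like this exists in cloud-provider-aws yet), the CCM could gate its NLB path behind an opt-in flag:

```go
// Sketch: gate the CCM's NLB handling behind an opt-in flag so users get a
// deprecation warning and an explicit re-enable step. All names are assumed.
package main

import (
	"flag"
	"log"
)

func reconcileNLB(serviceName string) {
	log.Printf("reconciling NLB for service %s", serviceName)
}

func main() {
	// Hypothetical flag; once flipped off by default, users must re-enable it.
	enableNLB := flag.Bool("enable-legacy-nlb", false,
		"re-enable deprecated in-CCM NLB support")
	flag.Parse()

	if !*enableNLB {
		log.Println("NLB support in this controller is no longer supported; " +
			"please migrate to the aws-load-balancer-controller")
		return // skip the NLB reconciliation path entirely
	}
	reconcileNLB("my-service")
}
```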
H
And CLB is also a legacy feature, so AWS kind of supports migrating to NLBs, and NLBs will get things like security group support and routing improvements; at that point it should be more generally acceptable. So maybe that's the direction we want to think about.
B
When
you're
so
you're
saying
when,
like
as
clbs,
are
deprecated
users
should
be
moving
to
nlbs.
H
Correct
so
we
don't
even
provision
clvs
like
in
the
long
run,
we
provision
nlb
by
default
when
users
create
a
service.
B
Okay, this one I had just sort of put up first. For the foreseeable future, the next couple of releases, we will have code in the legacy cloud provider location, so some bug fixes will go there, and if they do go there, we'll need to cherry-pick them back to this repo. I was just thinking, well, we could maybe import this repo into legacy-cloud-providers, which is the opposite of what we were originally doing.
B
And
walter
said
this
repo
imports
parts
of
staging,
so
we
it's
not
safe
to
import
it
into
any
of
the
kk
public
repos.
I
do
not
see
it
importing
anything
from
kk
private
as
long
as
that
remains
true.
I
think
it
should
be
safe
to
import
in
kk
private,
so
I
don't
know
how
important
that
is.
Maybe
we
just
deal
with
cherry
picking
for
now,
because
it
it's
not
going
to
last
forever
so.
A
The 1.21 release tests?
B
I did do some work on a potential test framework.
H
So what's happening is: once we select the subnet, if the subnet is full, the load balancer creation fails, and that's expected. Right now we don't try the next subnet, and that's what the user is asking for here. Yes.
H
Only one subnet per zone, so when we create the load balancer, we have to select the proper one.
H
Here is the thing, though: remember we did an in-tree fix for EKS 1.19 where they can specify a subnets annotation to choose the subnet they want. That's going to save them here, so we should mention that. Rather than adding additional intelligence to our code, we should just ask them to use the annotation, which should give them a working configuration.
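As an illustration of that workaround, a Service can pin the load balancer to explicit subnets via an annotation. The annotation key below is the one used by the aws-load-balancer-controller and is an assumption here, since the meeting doesn't name the exact key used by the in-tree fix:

```go
// Sketch: a LoadBalancer Service pinned to explicit subnets via annotation.
// The annotation key is an assumption; check the provider's docs for the exact key.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "my-nlb-service",
			Annotations: map[string]string{
				// Choose the subnet(s) explicitly instead of relying on auto-discovery.
				"service.beta.kubernetes.io/aws-load-balancer-subnets": "subnet-0123456789abcdef0",
			},
		},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
		},
	}
	fmt.Println(svc.Name)
}
```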
H
It might have been cherry-picked; we can take a look later and go through it. This is NLB, right?
B
And I will try to help you close these.
H
I see a load balancer one; I'm just taking it up right now. So yeah, if it's not related, I'll definitely reach out.
B
So which fix in the code are you actually talking about? Do we actually apply that tag now?
H
So when we create the subnet, we no longer add the tag. Correspondingly, we fixed the in-tree provider to not look at the cluster tag as well. I'll give you the PR, and then you can see if cloud-provider-aws has that fix. I see the fix in the code, but I don't know the code flows exactly enough to make any further comment.
D
So I have a question: it sounds like the long-term direction is to get to a much richer model for load balancers. Is that right? In the aws-load-balancer-controller project, if I recall correctly, you effectively have CRDs for everything, so you can specify all these things and there's less magic logic. Is that right?
D
That makes sense. If I recall, Google added a backing CRD; I can't remember whether it was on ingress or on service. It might have been on ingress, but they like that pattern, I think.
G
There's also the Gateway API, which is a new model for services, and we plan to support that as well. Internally we kind of have a CRD, well, not necessarily a CRD but an in-memory model, which specifies all the attributes, and we translate the ingress model or service model into our internal model and then operate on that interface. So if we support other APIs, like the new API introduced by Google, we can just write a translation layer which translates the Gateway API into our internal model.
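As a rough sketch of the translation-layer pattern just described (all type and function names are illustrative, not the controller's actual internal API), the idea is one internal load balancer model with per-API translators:

```go
// Sketch: a single internal load balancer model plus translators from each
// user-facing API (Service, Ingress, Gateway). Names are illustrative only.
package main

import "fmt"

// internalModel is the one model the reconciler operates on.
type internalModel struct {
	Name    string
	Subnets []string
}

// translator converts one user-facing API object into the internal model.
type translator[T any] interface {
	Translate(obj T) (internalModel, error)
}

// serviceSpec stands in for a corev1.Service of type LoadBalancer.
type serviceSpec struct {
	Name        string
	Annotations map[string]string
}

type serviceTranslator struct{}

func (serviceTranslator) Translate(s serviceSpec) (internalModel, error) {
	return internalModel{
		Name:    s.Name,
		Subnets: []string{s.Annotations["subnets"]}, // simplified mapping
	}, nil
}

func main() {
	var t translator[serviceSpec] = serviceTranslator{}
	m, _ := t.Translate(serviceSpec{Name: "svc", Annotations: map[string]string{"subnets": "subnet-abc"}})
	fmt.Printf("%+v\n", m)
}
```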
D
That makes sense. I think even before adding an official API they added an additional CRD. I'm going to try to find the docs for that, because it's a nice trick to still support custom features without having to get everything into the core APIs. I'll paste the link.
G
Yeah, so this one: we didn't chunk the API calls when registering targets with the NLB. So if we have 900 targets, all 900 targets get registered at the same time. However, the ELB side might reject the targets when there are too many in a single call; the behavior there is unspecified. If that happens, they will reject the register call indefinitely, and the errors get thrown back to the customer. So I made a PR to fix it by batching the targets.
G
That is, at most 100 targets in a single API call. So can we cherry-pick this into all the versions as well?
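A minimal sketch of the batching fix described (registerTargets is a hypothetical stand-in for the ELBv2 RegisterTargets call; the batch size of 100 comes from the discussion):

```go
// Sketch: register targets in chunks of at most 100 per API call instead of
// sending all of them at once. registerTargets is a hypothetical stand-in
// for the ELBv2 RegisterTargets call.
package main

import "fmt"

const maxTargetsPerCall = 100 // batch size mentioned in the meeting

func registerTargets(batch []string) error {
	fmt.Printf("registering %d targets\n", len(batch))
	return nil
}

func registerAll(targets []string) error {
	for start := 0; start < len(targets); start += maxTargetsPerCall {
		end := start + maxTargetsPerCall
		if end > len(targets) {
			end = len(targets)
		}
		if err := registerTargets(targets[start:end]); err != nil {
			return err // surface the first failed batch to the caller
		}
	}
	return nil
}

func main() {
	targets := make([]string, 900)
	for i := range targets {
		targets[i] = fmt.Sprintf("i-%04d", i)
	}
	_ = registerAll(targets)
}
```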
B
To add to that: it was causing users issues, and I would see this as a bug.
C
Yeah, if we're all in agreement here that it's a bug, I can go ahead and approve it, and I'll let other folks weigh in.
F
I added a link to the agenda. I have a PR to the load balancer controller for initial support for VPC endpoint services, and I'm hoping to get eyes on that.
H
We're aware of the PR; we're just slightly busy with the 2.2.0 release right now, so it will have to be put on hold for now, but we'll definitely take a look. Thank you for the contribution.