From YouTube: Kubernetes - AWS Provider - Meeting 20211001
Description
Recording of the AWS Provider subproject meeting held on 20211001
Discussed LKG testing and reviewed https://github.com/kubernetes/kubernetes/pull/105361
A
Okay, hello everybody, welcome to the Provider AWS meeting. It is October 1st. This meeting is being recorded, and please heed the CNCF community guidelines and be respectful to others. Relatively small agenda today. Oh, Kishore, actually, I think you're adding to the September 17th agenda. Oh sorry, my bad. No worries.
A
So I will just quickly jump into the first item, which is LKG testing. This is something that is coming from SIG Cloud Provider; it was proposed there and I just wanted to review it here. I will try to summarize it, although I'm not super familiar with it, so bear with me. Basically, the problem statement is that when you move out of tree for a cloud provider, you come across the issue where changes in in-tree Kubernetes can now break cloud providers without a feedback loop, and potentially vice versa. So this goes to address that, and also to make it easier for developers working on the cloud provider code base to know whether a change was broken by something in the cloud provider itself or by something that changed upstream. It's a relatively common practice with dependency projects: you test against a last-known-good version of your dependencies, in this case kubernetes/kubernetes. That could be pre-submits on the cloud provider repo, but there will also be periodic jobs running on kubernetes/kubernetes, which will provide feedback in the other direction.
A
We won't block in the reverse direction, on PRs in kubernetes/kubernetes, but the pre-submits running on the individual cloud provider repos, those are at the discretion of each cloud provider. You could disable merges based on an inability to find a last-known-good version because of some change, basically. So I don't know if my summary was any good, but here's the doc. As of now, we're on board with this.
A
This is how we will do testing in the cloud-provider-aws repository; we'll take advantage of the shared framework that SIG Cloud Provider comes up with. So yeah, I don't think we need to go into more details than that, but I just wanted to share the doc here, and if anybody is interested in it, you can provide feedback on the doc.
B
A
Yeah, right now it's not super clearly defined, but we release a new version of the cloud provider with each version of Kubernetes, and we haven't published, you know, what version is supported with what. So I think the implicit understanding is that for the external cloud provider, they kind of share a minor version: you upgrade to Kubernetes 1.23, and then you also upgrade your external cloud provider component to the .23 version. If we want to do less work and support more versions, I'm open to it, but that's what we're doing right now.
B
Okay, I was just thinking about it. The cloud provider changes don't have to keep up with Kubernetes, right? A lot of the time it's going to be constant, because the SDKs and all the things on the cloud provider side have a different cycle. So I was just thinking out loud whether it would make more sense to release with Kubernetes or to have some independent release cycle for the cloud provider. Those are different repositories, so we don't have to tie them to the Kubernetes version.
A
No, I mean, it's a good point and I think we should consider that. The reason we're doing the versioning with Kubernetes right now is that that was sort of agreed upon as the recommended way to do it in SIG Cloud Provider. But I completely agree with the downsides there. It is more work to keep up with the upstream repo when it's not completely necessary. But yeah, cool. So...
B
Something we can keep discussing further, of course.
A
Also a good question. I don't know exactly what version it will be recommended in; I think that's going to be when we decide that it's ready. It's not going to be forced on us for a few more versions, because we're still waiting for some upstream deprecations. For example, the API server still has a need for cloud provider code because of the persistent volume labeler. SIG Cloud Provider has all of the dependencies tracked, and there's a KEP to remove the persistent volume labeler. There's...
C
No, I'm sorry, I didn't state that clearly. Kishore had asked when we're going to feel comfortable saying, you know, it's ready or whatever. Do we have a list of criteria by which we're going to be confident that the split-out provider-aws is ready for prime time?
A
So, you know, the CCM specifically, I mean, that is one piece, and that is close to being ready, right? It essentially works; we just need to finish the testing that we're working on right now, and publish. Right now the container images are labeled alpha. So once we have increased the testing, we need to essentially publicize: okay, we're moving to beta now for these container images, let's try to get more users, and then at some point move to GA.
C
Is the last-known-good testing going to play a part in that?
C
I mean, last-known-good... alpha, last-known-good, like...
A
A useful tool that we can take advantage of in our testing, but it is not a requirement, and it's separate from, you know, just getting our e2e testing framework up and that kind of stuff.
B
And one more thing before we move on to the PR: are we going to sync up the code from provider-aws to the legacy cloud provider in the interim, so that we can continue developing features on the provider side and don't have to worry about the legacy provider? Or what would be the story for us while the in-tree provider is still allowed? Because right now we are in a state where we don't allow changing anything in the legacy cloud provider.
A
Yeah, I agree. Currently, I think... I will defer to the upstream timeline for removing that code, which is again reliant on the dependencies. We can't remove the in-tree cloud provider until we've deprecated the persistent volume labeler, and that is waiting on this KEP to build out a replacement for it. Basically, I think that's the longest tail. There's the kubelet image credential provider, which is, I believe, beta now, so that's moving along well, and then the CCM is making progress. So as soon as all those upstream dependencies are removed, the upstream code will be removed, but I don't think we're going to do it earlier than upstream requires, I guess.
A
We should also be talking about how they wanted to include the dependency back in.
B
Yeah, something similar. I mean, we definitely don't want to stick our head out, of course, but what it feels like to me is that it's going to take some time, right? It's not going to happen immediately, over one release or so, so we want to continue to develop features and fixes in the meantime as well. So we want to find a balance... The legacy provider is now deprecated.
A
Sort of... I agree. I mean, features are already deprecated, in that we're not merging features in the legacy provider, so they will diverge more and more until the legacy provider is finally removed, I think. But I agree, it's just going to get harder and harder to cherry-pick stuff back and forth between them, so the sooner it's done the better.
C
Were you asking whether we can continue to merge new code into the legacy cloud provider AWS, along with the split-out cloud-provider-aws?
B
I think that would be difficult to maintain. What I was thinking about more was whether we have a mechanism to use a CCM, like cloud-provider-aws, as in-tree, something to that effect, so we only have one place to develop but we still get the features. That way the in-tree deprecation can take its own time, and when they deprecate it we are ready. That was what I was getting at, yeah.
B
Like
we
stopped
the
development
on
this
legacy
provider
that
clearly
doesn't
work
in
our
favor
right,
because
now
we
gonna
tell
the
customers.
Okay,
these
features
won't
be
there
or
these
features
won't
be
there.
So
that
is
what
will
be
difficult
for
us
to
maintain.
A
Yeah,
it's
kind
of
a
it's
leverage
to
get
people
to
move
to
the
out
of
tree
provider.
So
yeah
google
had
the
same
want
and
after
a
discussion
and
said
cloud
provider
they
eventually
they
were
sort
of
denied
and.
C
Couldn't
we
make
a
a
rule
that
everything
has
to
go
into
ccm
cloud
provider
aws
first
and
then
only
in.
B
Sure, sure, yeah, let's bring that up as well. So yeah, that's where the dilemma comes in, right? I'm going to explain my dilemma there.
B
Of course. So here's the thing: this feature gate will be enabled... we plan to enable MixedProtocolLBService in 1.23. What happens is, after this is enabled, Kubernetes will no longer validate that the protocols are the same. Until now, Kubernetes had been validating that all ports are either TCP or all ports are UDP; no mixed support is available. And the way the in-tree cloud provider controller code is built right now, it assumes that validation will be there.
B
So
that's
what
like
most
of
the
code
is
built
in,
so
we
do
have
udp
support
for
nlp,
but
then,
after
this
feature,
gate
is
enabled
that
code
has
to
be
refactored.
It
doesn't
work,
as
is
because
the
assumptions
that
were
made
earlier
is
no
longer
valid.
So
so
I
had
two
options:
one
was
like:
okay
go
refactor,
the
whole
thing
make
the
udp
work
in
mix
protocol
with
tcp,
and
there
was
one
option,
but
then
this
is
the
legacy
code
where
we
don't
want
to
offer
like
lot
of
features
right.
B
So
that's
where
things
come
in
and
then
what
I
end
up
doing
was
just
move
the
validation
to
the
aws
provider
code.
That
way,
if
there
are
services
with
mixed
protocol,
we'll
reject
them
will
not
provision
any
load
balancer.
So
what
that
will
do
is
whatever
that
exists.
Right
now
will
continue
to
work
in
the
entry
without
significant
changes.
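The move being described, carrying the same-protocol check the API server used to enforce down into the provider's load balancer path, could be sketched roughly like this. This is a minimal standalone sketch with stand-in types, not the code from the PR; the real change operates on core/v1 Service objects inside the AWS cloud provider.

```go
package main

import "fmt"

// Protocol mirrors the Kubernetes core/v1 protocol strings.
type Protocol string

const (
	ProtocolTCP Protocol = "TCP"
	ProtocolUDP Protocol = "UDP"
)

// ServicePort is a minimal stand-in for core/v1 ServicePort.
type ServicePort struct {
	Port     int32
	Protocol Protocol
}

// checkMixedProtocol rejects port lists that mix protocols, mirroring the
// validation the API server performed before the MixedProtocolLBService
// feature gate relaxed it. A provider calling this before provisioning
// would refuse to create a load balancer for mixed-protocol Services.
func checkMixedProtocol(ports []ServicePort) error {
	if len(ports) == 0 {
		return nil
	}
	first := ports[0].Protocol
	for _, p := range ports[1:] {
		if p.Protocol != first {
			return fmt.Errorf("mixed protocol is not supported for LoadBalancer: found %s and %s", first, p.Protocol)
		}
	}
	return nil
}

func main() {
	same := []ServicePort{{Port: 80, Protocol: ProtocolTCP}, {Port: 443, Protocol: ProtocolTCP}}
	mixed := []ServicePort{{Port: 80, Protocol: ProtocolTCP}, {Port: 53, Protocol: ProtocolUDP}}

	fmt.Println(checkMixedProtocol(same))  // <nil>
	fmt.Println(checkMixedProtocol(mixed)) // error: mixed protocol is not supported ...
}
```

The effect is exactly what the discussion describes: Services that were valid before the feature gate keep working unchanged, while mixed-protocol Services, which the API server now admits, are rejected at the provider instead of being half-provisioned.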
B
So
this
is
the
first
step
that
I
wanted
to
pursue
and
after
this
change
goes
in
depending
on
like
if
we
have
time
resources-
and
we
want
to
take
the
changes
we
can
make
further
fixes
in
the
entry
like
during
123
or
124,
then
the
urgency
is
not
there
right.
Then
we
don't
block
every
other
cloud
provider
like
this
feature.
Gate
can
go
in
and
we
have
our
own
schedule
and
time
to
support
this.
B
So
further
fixes,
like
yeah,
we're
not
going
to
do
entry
because
it's
legacy
right,
so
we
can
do
in
the
cloud
controller
manager
so
that
that's
where
the
thinking
goes
and-
and
I'm
always
split
like
so
far,
my
development
I've
been
using
entry.
I
have
to
build
the
whole
thing
from
master
and
test
and
all
that.
So
if
we,
if
we
want
to
go
to
the
ccm
route,
then
that's
where
we
need
to
switch
as
well
like
the
primary
development
mechanism.
B
So
I
I
need
to
do
that
as
well,
of
course,
and
that's
where
we
should
push.
C
So
kishore
you'll
have
to
forgive
me
if
I
don't
know
the
the
details
of
this.
So
that's
why
I'm
asking
you
these
questions,
but
from
looking
at
the
the
pr?
It's
really
just
it's.
It's
making
a
change
into
the
legacy
cloud
provider
aws
to
prevent
any
service
of
type
load,
balancer
with
multiple
ports
right
with
mixed
with
mixed
ports.
Are
you
saying
that
the
upstream
or
upstream,
the
kubernetes
api
server
is
no
longer
going
to
be
validating
that
and
it's
basically.
C
To
the
cloud
provider,
okay,
correct,
and
so
what
this
pr
is
doing
is
just
for
the
legacy
cloud
provider
aws.
It's
saying:
okay,
let's
just
deny
it
for
the
legacy
cloud
provider
stuff
and
then
ccm
the
broken
out
provider.
Aws
then
we'll
enable
it
as
needed.
B
Correct
and
we
can
do
more
fixes
in
there
and
also
we
already
offer
like
nlp
through
aws
load,
balancer
controller
and
there
we
already
support,
mix
protocol
just
fine
with
limitations,
but
we
definitely
support
that.
So
we
have
a
solution
for
the
customers
who
want
to
use
this.
So
that
is
the
reason
why
I'm
I
decided
not
to
invest
more
time
to
fix
the
legacy
cloud
provider.
A
If
we
should
just
I
mean
yeah,
we
can
definitely
merge
this
here
and
I
think
this
should.
This
obviously
has
to
go
entry.
The
other
option
is,
if
we
can
do
it
in
time,
we
merge
this
entry
only
and
then
invest
the
time
which
you
know
if
we
have
it
to
allow
an
lb
with
with
mixed
protocol
on
the
ccm.
B
We
can
do
that,
but
what
happens
is
like
as
soon
as
this
feature
gate
is
enabled,
then
the
ccm
will
be
broken
for
that
feature
right.
So
that
is
what
I
wanted
to
avoid.
So
this
has
to
be
a
stepping
stone
and
we
built
on
top
of
this.
So,
okay,
we
are
always
offering
like
something
that
is
stable
to
the
customers
right.
We
knowingly.
We
can't
have
things
that
break
so
that
that's
the
reason
I
was
pushing
it.
C
You
I'll
comment
on
it.
No,
I
don't
have
anything
else,
and
actually
I
have
a
heart
stop
anyway.
Sorry.
A
But
I
guess
all
right:
well,
thanks
everybody
for
coming
and
oh
next
meeting
will
be
probably
canceled
because
of
kubecon,
but
maybe
you
know
if
anybody's
in
la
I
will
be
there.
We
can
have
an
informal
lunch
or
something
like
that.