From YouTube: Kubernetes - AWS Provider - Meeting 20201016
Description
Recording of the AWS Provider subproject meeting held on 20201016
A: Hello everybody, and welcome. This is the AWS cloud provider bi-weekly meeting of the Kubernetes project. Today is Friday, October 16th. I am your moderator and facilitator for the day, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the internet, so please be mindful of our code of conduct. If we have a lot of people trying to talk at once, please use the raise-hand feature. I pasted a link in the chat to our agenda. Please feel free to add any items you may have to that, and please feel free to add your name to the attending list; it can be helpful for people watching the video to correlate back who you are. Otherwise, it looks like we have four items so far on an expanding agenda, and Nick looks to have the first three. Do you want to take us off with v2 provider upgrades?
B: Yeah, so for this I just wanted to query the group and see what people's thoughts were. For EKS especially, if we're going to use the v2 provider on EKS, then it's really important for us to be able to upgrade clusters from v1 to v2, obviously. So, two features that we were talking about having in the v2 provider are: expanded node names (I think the one that everyone sort of agreed would be good is instance IDs), and then another feature that has been talked about is friendly load balancer names. I just wanted to throw it out there and see if other people are thinking the same way, that we would want to be upgradable from v1. I'm assuming so, and if that is the case, I was thinking either I or someone else could put together some thoughts on how that might happen. But I haven't done much thinking about it as of yet, so I'm just curious to hear your thoughts.
A: I'm very much in favor of supporting upgrade if we can. It obviously helps our users, it helps with adoption; it seems good all round. The only reason to make a breaking change, as it were, is if it is somehow impossible, or very difficult, to do that upgrade. So I think what you propose sounds wonderful in terms of documenting the process. And should it happen that the process is hard for us as implementers to support upgrades, then we can cross that bridge: do we change the v2 functionality, do we decide it's a breaking change? Because there will be much lower uptake, and a sort of complicated migration one way or the other, if they are not seamlessly compatible, as it were.
A: Awesome, thanks. All right, next on the agenda: Nick, you're going to talk specifically about the v2 load balancer support.
B: Yeah, and I was hoping Andrew would actually be here for this one, but I will mention it and then I will follow up with him.
A: Yeah, if you'd like, we can pause and see if he's going to come.
B: Yeah, so I'll just introduce the topic and then we can discuss it at a later date. Basically, what AWS is doing: you guys might be familiar with the ALB ingress controller, and some engineers are moving the NLB support from the cloud provider into the ingress controller and calling it the AWS load balancer controller, or something like that.
A: We can hear you; we were just losing a couple of words here and there, which made it hard. Is it clear we lost the second half? Gotcha, I think. But go for it, and I will interrupt again if it happens again; you might have to start from the top on this topic.
B: Yeah, just interrupt me if you can't hear. But basically, what I was saying was: the EKS networking team is bringing the NLB functionality from the cloud provider into the ALB ingress controller, and sort of renaming that project. I think it's called the AWS Load Balancer Controller.
B: Okay, just checking. So the idea, the goal, is to kind of deprecate classic load balancer support, move NLB into this other controller, and just have all the load balancer logic in that one controller. And then, I guess, the cloud provider would still support classic load balancers, but it would be in a sort of, you know, unencouraged, deprecated state. So first of all, I guess I'm just curious for thoughts on this.
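[For context: with the in-tree provider, the choice between a Classic Load Balancer and an NLB is made per Service via an annotation. A minimal sketch; the Service name, selector, and ports here are made up for illustration:]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc            # hypothetical name, for illustration only
  annotations:
    # Without this annotation, the in-tree AWS provider provisions a
    # Classic Load Balancer; "nlb" selects a Network Load Balancer.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```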
A: Yeah, I think what you said sounds great. I think more modular controllers is a good idea. One of the things we should be wary of: I think the deprecation of classic is something we'd want to sort of surface and verify separately with users.
A: I know that Nadir and I have had this discussion in the past about hairpin and things, but I don't know whether any reason to use classic still exists. But, going back to the initial point about compatibility: if we are going to deprecate the classic load balancer, someone that's using classic would presumably have to do something, and it might interrupt their connectivity.
B: Yeah, and with regards to deprecating classic: I think it's probably not going to go away any time soon. It's more that, you know, we think NLB is better in a lot of ways, so we want users to default to NLB, but not necessarily... I don't think we should get rid of the classic support.
B: I don't think that's clear yet; that's kind of still open for discussion. My personal opinion is that we should just leave it, leave the classic support in the existing provider, as this, maybe not default, but, you know, very basic feature. We would then move...
A: Yes, you broke up a little bit there, but I will summarize for the record, I think: there are two NLB modes, one of which is currently in the existing cloud provider, and that would be moved into the current ALB controller, so that it's all in one place, exactly.
A: I think what you propose, in terms of leaving the classic in the cloud provider, is a reasonable position. My concern is that it could become a barrier to the v1-to-v2 migration: if people that have a classic load balancer can't easily turn off the built-in cloud provider, for example, that could become a barrier.
A: Great. Nick, you technically have the next item, on e2e, but I'm going to skip you temporarily to go to Nadir, if that's all right. And I don't know if you want to try to get closer to your router or something, so I'm just going to make that decision. Do you want to take the next round?
C: Yeah, so we chatted about this as part of the kOps Hacktoberfest, around building out kubetest2. Simultaneously, Ben Moss from VMware has started work on the kubetest2 deployer for Cluster API as well, mainly in support of cluster autoscaling, but we'll be able to reuse it.
C: I've just got some questions about how we're going to do CPI testing. Say a commit is done to master: what version of Kubernetes are we going to test? And if it's head of tree, then we've got some extra work to do, around getting artifacts into the right place and making sure they get downloaded.
A: So, there are some marker files, which are basically just text files in GCS buckets, and they record the last version of Kubernetes that successfully passed with a kOps build. That file is produced by a job that runs against the master of Kubernetes.
A: The equivalent... let's just call it a build of kOps, and we'll come back to that in a minute. But they run against the master of Kubernetes, and so it writes a file; ci-green.txt is, I think, the name of it. I think it suffices to just put the git tag in there; it has a tag in there, and it is then possible to download, from the Kubernetes CI which produces them, the binary artifacts with that tag.
A: The goal of this thing is: suppose Kubernetes broke kOps in some way. We wouldn't update that ci-green file for kOps, and so we wouldn't break all the kOps PRs.
A
And
then
you're,
muted,
okay,
the
thing
which
I
elided
is
there
is
a
parallel
sorry,
a
parallel
job
going
the
other
way
for
cops,
which
runs
the
latest
cops,
build
and
produces
those
artifacts
and
uploads
those
artifacts,
and
that
is
what
is
used,
I
think
by
the
ci
green
job.
So
it's
like
a
climbing
the
ladder
type
thing
like
one
hand:
in
the
other
hand,.
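[The marker-file flow described above can be sketched roughly as follows. This is a hedged illustration: the URL layout is modeled on the public Kubernetes CI artifact naming, and the exact marker filename and paths are assumptions, not details from the meeting.]

```python
# Rough sketch of the ci-green marker flow described above. The URL
# layout is modeled on the public Kubernetes CI artifact naming; the
# exact bucket paths are assumptions, not from the meeting.

def artifact_urls(marker_text: str, arch: str = "linux/amd64") -> list[str]:
    """Given the contents of a ci-green marker file (a single git tag,
    e.g. 'v1.20.0-alpha.3.123+abcdef'), build download URLs for the
    matching Kubernetes CI binaries."""
    version = marker_text.strip()
    base = f"https://dl.k8s.io/ci/{version}/bin/{arch}"
    return [f"{base}/{name}" for name in ("kubelet", "kubectl", "kubeadm")]

urls = artifact_urls("v1.20.0-alpha.3.123+abcdef\n")
print(urls[0])  # https://dl.k8s.io/ci/v1.20.0-alpha.3.123+abcdef/bin/linux/amd64/kubelet
```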
A
And
you
probably-
and
you
want
to
have
your
own
job-
that
produces
your
own
ci
green,
because
otherwise,
if
kubernetes
breaks
cluster
api,
I
mean.
Perhaps
that's
not
a
bad
thing
in
that,
like
in
practice.
If,
if
kubernetes
big
breaks
cops,
we
it's
hard
for
us
to
figure
out
what
went
wrong
but
like
technically
like
an
advantage
of
doing
your
own
ci
green
would
be
that
you
would
observe
a
breaking
kubernetes,
wouldn't
break
cluster
api.
C: All right, we'll figure it out. But yeah, it's good that we don't have to build; we can reuse those CI artifacts we're already producing. I would definitely... yeah.
A
They're
kept
around
for,
if
I
recall
like
90
days
or
something
like
more
than
we
need
more
than
we
hopefully
need.
So
I
would
definitely
do
that
approach
and
you
can
list
the
bucket
and
do
things
like
that
if
you
want
to,
if
you
wanted
to
like
pick
out
a
particular
build
for
some
reason.
Okay,.
A: Cool, all right. Nick, we will go back to you for the topic you raised about the e2e plan: Prow build-and-push PRs with Cloud Build.
B: I'll cross my fingers; if this just interrupts, we'll table it till next time. But I was just wondering... it's louder here, I'm not at home right now. But so, the plan for e2e with, I guess, official...
B
What
will
be
our
first
plan
for
ed?
Are
we
gonna
use
cluster
ai?
Are
we
gonna
use?
Cops?
Cops
support,
hopefully,
will
be
merged
soon.
Do
we
you
know?
Is
it
are?
We
are
you
guys
adding
adding.
B
A: So, Nick, we did lose a lot of that, but I think we got the gist of it, which is, I think: you're talking about the version-two cloud provider for AWS, how we're going to test it, the viability of testing it with kOps and Cluster API, and our plans on those two projects in terms of adding tests and possible adoption of kubetest2.
A: Yes, thank you. All right, so we got most of it; okay, good. And yeah, thank you for your work on the kOps stuff and for contributing to that. I can speak to the kOps side, and then maybe, Nadir, if you want to talk on the Cluster API side... Yeah, if we get the cloud provider integrated into kOps, and we can do that with a feature flag, or a flag, we can create...
A: As in, sorry, I should say: periodic e2e with a particular version of cloud-provider-aws version two. We can do that in various scenarios. And I don't think anything we do on kubetest2 should really affect that too much; that will be, you know, just a framework change, and shouldn't (hopefully; I'm sure we'll break it temporarily) break any of the results or signal you're getting for the cloud provider.
A
If
that
makes
sense
what
we,
what
we
are
not
able
to
do
is
give
you
continuous
signal
in
your
repo,
in
other
words,
in
the
cloud
provider,
aws
repo
for
a
proposed
change
to
cloud
provide.
Aws
like
is
this
going
to
work.
That
would
have
to
be
a
separate
job
and
it
could
use
cops.
They
could
use,
I
presume
cluster
api,
but
we
would
have
to
set
that
up
specially.
A
We
could
probably
set
up
both.
Actually
it's
not
once
we
have
the
first
job.
It's
not
terribly
difficult
to
to
do
that.
I
don't
think
india
do
you
want
to
oh.
C
Yeah,
so
my
plan
is
to
get
work
with
ben,
get
the
cube
test,
two
deploy
it
on
and
then
add
to
the
cpi
repo,
the
periodics
to
run
cluster
api
with
the
ci
artifacts,
with
a
built
version
of
cpi
from
master
and
then
also
add
the
pr
job
pre-submit.
C
C
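[The periodic job described above might look something like the following Prow configuration. This is only a sketch: the job name, image, interval, and command are hypothetical placeholders, not an agreed-upon config.]

```yaml
periodics:
- name: ci-cloud-provider-aws-e2e-capi   # hypothetical job name
  interval: 6h                           # placeholder cadence
  decorate: true
  extra_refs:
  - org: kubernetes
    repo: cloud-provider-aws
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder image
      command:
      - runner.sh
      args:
      # Hypothetical invocation: build CPI from master, then run e2e
      # via the kubetest2 Cluster API deployer against CI artifacts.
      - make
      - test-e2e
```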
A: The value of having two is: suppose the kOps job breaks and the Cluster API one is working. Then we know that kOps broke; we know where to look, we have indications. If you have one, you're always wondering: oh, did kOps break, or Cluster API, or did the cloud provider break? So that's a big advantage of having two jobs.
B: Got it, thank you. Yeah, those are separate, so that's the... I looked at what Nadir showed me for the Cloud Build stuff and tried to replicate it, so that's for pushing images to staging. So yeah, Nadir, if you have a few minutes to take a look at that, that'd be great.
A: Okay, wonderful. That is the end of the topics on our agenda. If anyone else would like to bring up any topics, please feel free to do so, or just generally anything. Otherwise, everyone will get half an hour back.