From YouTube: Kubernetes - AWS Provider - Meeting 20200501
Description
Recording of the AWS Provider subproject meeting held on 20200501
B
But in a lot of projects there are owners that are getting assigned PRs and issues by the CI bot, even though they're inactive. So I'm going to go through a bunch of the OWNERS files and propose that we move those people to either emeritus approvers or just remove them entirely. But yeah, I will do due diligence and ping people prior to removing them, either on Slack or I'll just ping them in the PR and give them time to respond. Maybe they do want to start contributing again.
A
Cool, okay. So the next item was something that I put on there, which is sort of in a similar vein. We have this demo backlog, and it seems like some of these have been on there for a long time, probably more than six months. So I just wanted to publicly ask whether we should clean that up, and get it on record.
C
There was interest, but not in this time slot, so we could definitely move it off the top. We could see if whoever is technically up next wants to do it, and then otherwise we could go to Nader and see if Nader actually wants to present that next time. I presume you're not ready to do it right now. Oh, you're right.
C
Right, sorry. Do we want to schedule Nader in for two weeks? Do you want to do that in two weeks, Nader?
A
So it's not the ideal situation, but I think I can... I'm already basically halfway done, so by early next week I should be able to publish the first image for that, and I think that'll hold us over until we all decide how we want to do the sort of CNCF-owned repo we talked about, I guess. If anybody has anything to add to that, or has comments on it. You might have an opinion on whether we should share the kOps test accounts or not.
C
Unless I'm out of time, or I screwed up in some other way, the process should be the same here. And then effectively what we get is: we get, per working group, a staging Docker repo and a method by which images can be promoted to the official k8s.gcr.io repo in the CNCF billing account. For the accounts question...
C
We thought that the right way to do that would be at the project level, because then we don't have to rely on each job to correctly tag the resources, which can be a little fragile: if someone messes it up, then you have this big bucket of who knows what. And so from that point of view, that would then imply there are different Boskos pools for the different AWS jobs, and we have to somehow teach Boskos about them.
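A dedicated pool of AWS accounts like the one being discussed might be expressed in the Boskos resource config along these lines. This is only a sketch: the resource type name and account names are invented for illustration.

```yaml
# Sketch: a Boskos pool of AWS accounts dedicated to one set of jobs
resources:
  - type: aws-cloud-provider-account   # hypothetical resource type
    state: free
    names:
      - aws-account-001
      - aws-account-002
```

Jobs would then acquire an account of that type from Boskos at the start of a run and release it when done, which is what makes per-job-set cost accounting possible.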
C
We do have the ability to create accounts more or less dynamically. I'm still working through some of the wrinkles there, but that more or less works. And there is supposed to be a limit, but we haven't found it yet. So I guess we can create a pool of more accounts that would be dedicated to this particular set of jobs, or whatever you want to call that, and then we would know how much we were spending on each one. It might be that we don't care until the money becomes more significant.
C
Basically we're going to start with pretty rough granularity, and so probably a lot of sharing, and then move to much more granular as we need more detail. Most of the biggest vendors' GCP projects are, as far as I know, more or less unlimited, so it might be that we do end up doing one project per job, but honestly I don't know how painful it's going to be to manage all those diverse projects. AWS, I suspect, is a lot more restricted in terms of the number we can get immediately, and so...
A
Thanks for doing that, that was really super helpful. On a related note, the authenticator release: I started working on that yesterday. I know I promised it for this meeting, but that should be either today, or maybe I'll finish it up over the weekend. But Peter, if you can help me just run a couple of tests before I publish the final binaries. And then I also have a work-in-progress pull request for an e2e test runner, and I want to use kOps for that. So I have kOps spinning up the e2e test.
A
The thing that I'm trying to figure out right now is what's the best way to do it. So kOps has this built-in authenticator capability, where you just set authentication in the config file, and it's just AWS. I'm hoping there's a better way to do this, but right now what I was trying to do is create the cluster with that setting and then modify the authenticator pod's image to be what I want it to be, the one that I just built for the testing.
E
The API field that enables the authenticator pods on a kOps cluster also supports an image field, so you can override the image it uses. Okay, so you could do that. Right now you do have to edit that manifest and then do a kops replace; there's no CLI flag support to set that, to enable the authenticator or to set the image in a kops create cluster command, for example. We could look at piping that in; it might be a reasonable thing to support. So, yeah.
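The edit-then-replace workflow described here could look roughly like the following. This is a sketch: the authentication and image fields follow what is described in the meeting, and the registry path is a made-up placeholder.

```yaml
# Sketch: cluster spec fragment enabling the AWS authenticator with a custom image
# (kops get cluster <name> -o yaml > cluster.yaml, edit, then kops replace -f cluster.yaml)
spec:
  authentication:
    aws:
      image: example.registry/aws-iam-authenticator:test-build   # hypothetical test image
```

After replacing the spec, a kops update cluster --yes would roll the change out to the cluster.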
C
Where does the test live? One of the things I'm trying to figure out is how we start adding our own tests. kOps today only has the Kubernetes e2e tests, and one of the things I'm trying to figure out is how we could add tests for additional functionality that are sort of kOps-specific, like logging in with the aws-iam-authenticator, or a kops rolling-update, or whatever it would be. Right, you're talking about during runtime. Where would it live, and how?
C
Does that make sense? We can talk more about it, maybe on Slack or something, about some of the things I've been thinking about here. But yes, because as far as I know, in the existing kOps test, and Peter, correct me if I'm wrong, we don't have an entry point or a hook point to do anything like that, other than...
E
I think we'd have to add a hook in the kubetest code to, one, clone the kOps repo at whatever commit. I think in the presubmits we clone it in order to build that commit, but for the periodics we don't clone the kOps repo at all. So we'd need to enable that and add logic to additionally run tests from that repo.
E
So this is kind of blurry, because it's not a Kubernetes repo, it's under the AWS GitHub org, but I'm working on integrating it into kOps. It's a webhook that will inject environment variables so that the AWS SDK can authenticate using service account tokens. The webhook's image is not publicly hosted anywhere, even on ECR, to my knowledge, so I'm just curious if that can be published either through AWS or through the CNCF. Oh yeah.
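The environment-variable injection described here follows the common IAM-roles-for-service-accounts pattern. A pod mutated by such a webhook would end up with roughly the following; the role ARN and paths are illustrative, not taken from the meeting.

```yaml
# Sketch of what the mutating webhook injects into a pod spec
env:
  - name: AWS_ROLE_ARN
    value: arn:aws:iam::111122223333:role/example-app   # hypothetical role
  - name: AWS_WEB_IDENTITY_TOKEN_FILE
    value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumeMounts:
  - name: aws-iam-token
    mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
```

Recent AWS SDKs pick these variables up automatically and exchange the projected service account token for role credentials.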
A
I was going to take this up at some point; I was just sort of waiting for people to complain enough to prioritize it. So yeah, I think it's something that we could do. We have other things, probably, that are more important at this point, but if you keep bugging me, then I'd say we can probably get all that stuff, get the repo and everything created for it, in the next few weeks or months. Maybe the next month, let's say. So yeah, well, I'll bring that back to my team. All right.
A
One thing we can do is look at the open issues in the kubernetes repo. We could also look at anything that's open in the cloud provider repo itself. And then we probably won't get to it, but there are the others, like the authenticator and various other projects that we are sort of responsible for. So, what do you guys think?
B
So I was going through the cloud provider issues the other day. I was just wondering when the last time they were triaged was. I'm just trying to gauge what kind of users are posting issues: is it folks that are trying to integrate with the provider, or is it just users out in the wild giving it a try? Because...
A
Probably they're doing everything from scratch. And I think also there's probably some confusion, because I've seen duplicate issues where people just don't know where to create them: they have some cloud provider issue and they create one in kubernetes/kubernetes and one in the cloud provider repo. So, and then also...
B
Actually, I was looking at this one yesterday. I think it's a lot to do with the fact that we don't document well what's required in the cluster. So if you're running an out-of-tree setup, and the cluster isn't tagged with the kubernetes cluster tag, and your node names aren't the private DNS names, then node registration breaks. I think that's what this user is seeing.
C
I mean, I think I know what that means. That's basically: if you want to plug in your own controller to implement the load balancer, you could do that, but if you do that today, you turn off everything at once. That's my understanding; I think that's what he's talking about. But I agree that some detail would be helpful there, yeah.
C
This is a dupe of a long-standing issue, which is that we should allow this. I think we have the list of five prerequisites that have to happen for this to happen. We have no objection to it happening; it just needs things like a migration path and that sort of thing. So I will find the kubernetes/kubernetes issue. I presume we want to close this one as a dupe, but, well, I don't know, are we sure we want to start closing these things?
B
So I actually think that a better path forward for this might be doing semantic versioning of the provider implementations. So instead of trying to migrate users to use an annotation or whatever, we should just say we're releasing an AWS provider v2, and that can have better naming for load balancers, and it can fix the hostname issue of having to be the private DNS name, and whatever other legacy issues we have with the AWS provider.
A
I think, if we're going to do that... there's a lot of behavior that people probably rely on, but it has a bunch of corner cases, especially with the load balancer annotations and the security groups, and it would be nice to have a clean slate to start from and be able to break some of those assumptions. So I'm all for that. But I think we should create an issue and just get good feedback on it.
C
I don't think users will be very keen; anyone who has stateful apps in their cluster will not be keen to move quickly, so it could be a long time. I'm also not convinced that either node naming or load balancer naming has to be done that way: I don't think either one of them is a breaking change, I don't think either one has to be done as a breaking change. For the load balancer, we could easily drop a tag, look for the tag.
C
I'd have a period where we set the tag but don't use the tag, then a period where we use it, then make the tag primary and identify things that way. For the node name, the problem is we just need to get the volume providers to be more careful about passing node names instead of... yeah, passing node objects instead of node names, and that would be very helpful.
B
Yeah, I agree. I think I mainly brought it up as a lens for how to think about some of these problems, because if we are anticipating a v2, then you can categorize these problems as "could be a v2 thing" versus "let's start thinking about complicated migration paths for them." And I'm...
B
I tend to think that, at least from my experience working on providers, some of these things are super hard to change, and even if you create migration paths, they just get really complicated really fast. So yeah, I just wanted to entertain that. And maybe this needs a KEP or something, but I just wanted to throw it out there.
C
I think there are two parts to this. One of them is, I've often heard people wanting to specify a range of ports on a Service, which to my knowledge we don't yet support: we say you just write out all hundred ports. We could make that better. The other one is we could also merge adjacent ports, though that has its challenges. In other words, if you see ports one, two, three, four, five, six, seven, eight, nine, ten, then it might create a single rule.
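The adjacent-port merging idea can be sketched in a few lines; this is an illustration of the approach, not code from the provider.

```python
def merge_adjacent_ports(ports):
    """Collapse a set of ports into (start, end) ranges of consecutive numbers."""
    ranges = []
    for port in sorted(set(ports)):
        if ranges and port == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], port)  # extend the current run
        else:
            ranges.append((port, port))         # start a new run
    return ranges

# merge_adjacent_ports([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) -> [(1, 10)]
```

Finding the ranges is the easy part; as noted later in the discussion, the real complexity would be diffing the computed ranges against the rules that already exist on the cloud side.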
C
I mean, you'd have to be pretty pathological to actually cause this to happen, right? Like, to actually go from a nice adjacent range and then delete every other one to go over the limit, that's pretty... yeah, it is a theoretical problem more than a real-world problem, I imagine. Actually, I mean, if...
C
But yes, I imagine we could do it. It would be an AWS-only thing that we could do, but it's not like other clouds couldn't implement the same thing. I don't think there's any reason to expose a library to help clouds do such a thing; it doesn't seem like it's that hard. The complexity is going to be computing the diff of the rules, not finding adjacent numbers.
B
This one's interesting, actually, because the other providers that set the load balancer IP but don't set the hostname run into a lot of problems, because of the way kube-proxy does a short path in the cluster for the load balancer IP. So pretty much, if you set the load balancer IP, and you use that IP in the cluster, it routes locally through kube-proxy and not back out to the LB, and that causes a whole bunch of problems for people. It's just something to consider.
A
It's the kube-proxy behavior: if it sees the load balancer IP in the service status, it creates local iptables rules so that any traffic to the load balancer IP gets forwarded to the pods. And I think it was mainly added for GCE, because GCE requires the load balancer routes to exist on the nodes, but that's pretty much the behavior.
B
The local traffic in your cluster for the load balancer IP doesn't go back out to the load balancer; it gets locally rerouted to the pods. Isn't that what I want? Is that what I want or not? Sorry... yeah, I think it depends who you're asking. I think that's usually what you want, but there are a lot of cases where you actually want to route it back up to the load balancer, because you want the load balancer to enforce TLS, or you want it to do some protocol handling at the load balancer layer.
C
I was just going to say, it looked like that person then commented that it wasn't in the out-of-tree cloud provider. My understanding is that the out-of-tree provider now is a mirror of the in-tree cloud provider, which wasn't the case when that person made that comment, so that comment might be a bit outdated. It is now supported.
B
Yeah, that explains it. And it's actually a really big problem for the providers, because if the source of the traffic is the load balancer's IP, then if the load balancer sends a health-check request, that gets dropped, because kube-proxy reroutes it. So there are all sorts of problems. So it's actually a good thing that it's using the annotation now.
C
If there are places they can contribute, I would point them to it, like give them a pointer to how they can do it. It sounds like we don't yet have a good place to point anyone to, but we're always looking for help, and that's organized around this issue.