From YouTube: Kubernetes - AWS Provider - Meeting 20211029
Description
Recording of the AWS Provider subproject meeting held on 20211029
Issue Triage: https://github.com/kubernetes/cloud-provider-aws/pull/272, https://github.com/kubernetes/cloud-provider-aws/issues/282, https://github.com/kubernetes/cloud-provider-aws/issues/281
B
Hello, everybody. This is the bi-weekly AWS cloud provider meeting. I am your moderator and facilitator for today, Justin, in Santa Barbara; I work at Google. A reminder: this meeting is being recorded and we put it on the internet, so please be mindful of our code of conduct, which boils down to being a good person. We have a couple of items on the agenda. Please do feel free to add your name and any other items.
B
So we've just closed that out, and then we were just looking at some of the other issues that were opened. I thought the first one was sort of interesting (well, they're all interesting, but I thought the first one was particularly interesting), which is number 282, saying that we should remove the hard requirement that the hostname matches the ECR pattern.
B
So
as
as
I
understand
it,
we
are
careful
to
only
provide
credentials
to
the
appropriate,
the
appropriate
registry,
so
we
won't
try
to
send
your
aws
credentials
to
docker
hub,
for
example,
and
the
way
that
works
today
is
there
is
a
regex
rejects
that
matches
against
the
host
name,
and
it
sounds
like
this
user
chaos
puppy.
B
Yes, chaos puppy has set up a mirror, and so now the image is docker.io/busybox in their example, but they are redirecting it, using containerd mirroring, to their ECR registry. But of course we're not going to pass the credentials, because as far as Kubernetes knows, when it's pulling the image, it looks like a Docker image. And I think we definitely don't want to be passing your AWS tokens to Docker Hub; in general, that pattern would be bad.
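The hostname check being described can be sketched roughly as follows. This is a hedged approximation, not the actual cloud-provider-aws code: the real pattern also handles FIPS endpoints and other AWS partitions, and the helper name here is ours.

```go
package main

import (
	"fmt"
	"regexp"
)

// Rough approximation of the ECR hostname pattern discussed above:
// credentials are only offered when the registry host looks like
// <12-digit-account>.dkr.ecr.<region>.amazonaws.com.
var ecrHostPattern = regexp.MustCompile(`^\d{12}\.dkr\.ecr\.[a-z0-9-]+\.amazonaws\.com$`)

// isECRHost reports whether AWS credentials should be offered for this host.
func isECRHost(host string) bool {
	return ecrHostPattern.MatchString(host)
}

func main() {
	fmt.Println(isECRHost("123456789012.dkr.ecr.us-east-1.amazonaws.com")) // true
	fmt.Println(isECRHost("docker.io")) // false: never send AWS tokens to Docker Hub
}
```

This is why the mirror case fails: the kubelet asks about "docker.io", the check says no, and no ECR token is attached, even though containerd will actually contact the ECR mirror.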
A
When the poster is asking for it not to be a hard requirement, what would the behavior of a soft requirement look like, I guess, and what benefit would that have?
B
Right, correct. I mean, I don't know how much... these are your AWS tokens, so they're not nothing. I think they might be downscoped tokens, but they're still...
A
That would be the transparent way, transparent to the kubelet, of dealing with this, right, which would basically just be a rewrite of the image URIs.
B
We're just going through some of the open, or recently opened, issues.
C
So hey there, I'm Alexandre Saison ("saison" is the French for "season" in English); I'm French. I work as a DevOps engineer at a small company called Innerex in France, and I'm the only DevOps person on the team, trying my best.
C
Yeah, kind of. And so I am trying every day to maintain all the infrastructure, all the applications and everything, and to help people better understand what DevOps means, like what CI/CD pipelines are and how to better handle things.
C
And yeah, it's been a long time since the last time I joined this Zoom meeting, so sorry about that, and it's also the first time I've had the opportunity to talk about myself. So, a huge thing. Thank you.
B
All right, well, so Jay, are you going to comment on the issue, or did you already? Sorry, I need to refresh.
A
I haven't yet, because I'm still trying to figure out what actually would be the best solution for chaos puppy in this particular case.
A
So no, sorry, I have not yet.
B
So we're talking here about issue 282, where a user is using an ECR Docker registry as a mirror for, for example, Docker Hub. They can configure that in containerd; however, containerd by default won't pass the credentials to the registry, and ECR will say no. So I think the request here was to allow ECR credentials to be passed for Docker Hub, or some other list of registries that the user could provide.
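For reference, the kind of containerd mirror configuration being described would look roughly like this. A hedged sketch: the account ID and region are placeholders, and the exact TOML layout depends on the containerd version in use.

```toml
# /etc/containerd/config.toml (CRI plugin registry mirror section).
# Pulls for docker.io images are redirected to a private ECR registry;
# containerd does not forward ECR credentials here by default.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://111122223333.dkr.ecr.us-east-1.amazonaws.com"]
```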
B
One
of
the
downsides
that
we
think
might
be
a
problem
is
we're
not
sure
that
a
fullback
wouldn't
send
the
token
then
to
dockerhub,
and
then
I
think,
jay
suggested.
Perhaps
a
better
option
might
be
to
just
configure
container
d2.
D
Right, that should also be possible. We can use IAM policies; that's how we allow access to all the ECRs for our cases as well. So, yeah.
A
Yeah, I don't think chaos puppy is looking for the ability to write anything. It's just... it's sort of an odd situation, actually.
B
It's a reasonable idea, right? You have a registry that often goes down, so why not back it with one that is local and, you know, presumably faster? Yeah.
A
I completely get that, but why not just remove the...
B
I think we should ask chaos puppy: if containerd could do the ECR authentication (which I imagine is a solved problem somehow), would that solve their problem? Then we wouldn't have to change anything; it would only be the node credentials. If they're somehow doing anything more advanced than that, then we need to rethink, but...
B
And then we can talk about the next one. I think the next one up is "add support for ELB session stickiness", so I'm glad you've joined, Kishore, because I think this is probably something you'll have an opinion on. So this is number 281.
B
I'm not sharing my screen today. Oh.
B
I assume this isn't in there today; is this...?
D
Right, eventually. So it's in the path of going to the NLB route, and for NLB we have the load balancer controller for now, where we do support it, so we at least provide some solution to the users. Of course, we have to have a good story for the cloud provider as well, like the CCM, eventually. So that's the next step that we need to work on.
B
Looking at the CLI commands, it looks like it's sort of additive, and if it's not supported by the controller, it looks like it's something that the user, for example, could run the CLI commands on after the load balancer was created. And I think that ties nicely into, Jay, your work on the operators. What is the status of the operators? Is this something we can configure with the operators?
A
It potentially could be, although we don't really have a whole lot of plans to do ELB or ELBv2 ACK controllers, just because the AWS Load Balancer Controller is already able to program load balancer resources in a Kubernetes-native way, as opposed to a CRD-based way.
A
The thing is, I just don't know if doing it on a Classic Load Balancer is going to be something where we're willing to spend resources, again, just because it's on the deprecation path.
A
And especially since sticky sessions are supported for ALB by the AWS Load Balancer Controller, right?
A
...than Service. I tell you what: I'll go ahead and respond on this particular issue and just say it's not something that we plan to support for Classic Load Balancers at this point, and that using the AWS Load Balancer Controller with ALBs is our recommended approach.
B
Let's see. So the next issue that was opened recently, as in since our last meeting, is number 280, which I will place a link to. This is around something called node impairment, which I'm not sure what that means, but I'm going to paste it: a "node with impaired volumes" taint applied when the volume attach error is recoverable.
B
So in issue 280, which I will place a link to, user Austin Berry says that they...
B
...had a "node with impaired volumes" taint applied to a node, and then the disk immediately attached itself, so this taint seems excessive; in fact, they say they have never had a case where the taint was applied and the error was not recoverable.
B
Okay, I'm actually not very familiar with this taint. Is anyone familiar with it?
B
So if a node is stuck in the attaching state for too long, then, at least going by the original code, we mark the whole node as unschedulable, so that no more pods go to that node.
B
It doesn't look like, once we've set that, we ever clear it, which is also a little odd. So, presumably, it's not a terrible thing to do: while a volume is taking a long time to attach to a node, after, say, five minutes, say "hey, stop putting more pods on here"; but then once it attaches, we should probably say "keep going".
A
Justin, I apologize, I need to drop. "Okay, thanks, Jay, good to see you." You as well, Justin; sorry for being late. "I'm sorry for dropping early. No worries. Thank you. Later, bye."
B
Yeah, we'll probably wrap up here in a minute or two anyway, I think, unless anyone has any context on this particular bug. I guess we'll just tag SIG Storage, I'm guessing. I think this is generic; I don't think this is specific to AWS. Oh, it is specifically addressed.
C
I once had the opportunity to hit that kind of thing, on the same AWS version, but the thing is, it was related to EBS storage that was taken by some other nodes in some other regions, and...
C
Since that time... if I remember well, it's still not possible to have read-write access on PVCs from more than one node, and it was kind of an error making the node kind of crash.
C
But yeah, as you mentioned, after that it was fixed in Kube version 1.14.
B
It sounds like the one the bug reporter hit was just that the timeout was too aggressive, too short. It sounds like the one you hit is more complicated, where another node had it locked somehow. Was that right?
C
Yeah, kind of.
B
But yeah, I don't have a ton of context on it, and I don't think anyone else here does. I mean, it sounds like, Alexandre, you have the background, you have the most context. Yeah.
B
Oh, it's about continuing the conversation, right? It's about trying to get more information and understand things better. So anything is great.
B
I think those are the items; those are the only open or newly opened PRs and issues. I don't know if anyone else has anything they want to talk about today.
C
I didn't get enough time to prepare for this meeting, sorry, and yeah, most of the time I find a way to fix it. So I'm starting to be more available and trying to improve my skills, also in Golang, to be able to participate in this kind of debate, and trying my best to...