From YouTube: Kubernetes - AWS Provider - Meeting 20201113
Description
Recording of the AWS Provider subproject meeting held on 20201113
A
Hello everybody, welcome to the provider AWS meeting on November 13th, 2020. I don't think we have anything official on our agenda yet, but I had suggested that we at least go through pull requests that folks want reviewed. I know there are some open ones related to the v2 provider, and I think there's at least a couple, like there's one for kOps, that we can take a look at, and then, yeah, I think that will at least kick-start the discussion. Feel free to add anything to the agenda as we're talking. So with that, I'm going to attempt to share my screen and see if that works, so bear with me for just one minute.
B
The text itself is quite small, so we might have to zoom in. But yes, we can see it; it is the full width of the window. I'm going to experiment by opening up a... okay, did that help? I don't know that it did. That's a much more traditional aspect ratio.
A
Okay,
all
righty,
so
wonderful,
so
I'm
just
going
to
go
ahead
and
take
a
look
at
our
cloud
fighter.
Aws
support
requests
and
I'm
actually
going
to
just
posit
that
nicole,
do
you
have
any
pull
requests
that
you
explicitly
want
us
to
take
a
look
at?
A
We don't need to fully review them in this meeting, but we can just take a look, and then everybody can give them a full review afterwards.
C
Yeah, so for the third one, the tags PR, the initial implementation of tags: I think this was ready for review, and I think Andrew left some comments, and I just updated it and addressed some of the comments. So I think... Andrew, did you want to say something?
D
I was going to say that we already merged the config PR, but I feel like that one we merged early, to unblock this PR. So I think it might be good to just go over the merged PR right now and see if we want to do any follow-ups to fix it.
D
Basically,
the
rationale
behind
that
one
is
sorry
about
the
background.
The
rationale
here
is
instead
of
using
like
any
based
or
json
based
yaml,
I
mean
it's
still
json,
but
like
we're
gonna
actually
use
like
api
version
types
for
the
config.
That
way,
we
can
you
know
api
version
of
config
is
going
forward.
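For reference, a versioned config along these lines might look roughly like the sketch below. The type and field names here are illustrative assumptions, not the exact contents of the merged PR.

```go
// Hypothetical sketch of an API-versioned cloud config (names are
// assumptions, not the merged PR): the config becomes a typed object that
// can be defaulted, converted, and evolved like any other versioned API.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// AWSCloudConfig is the top-level versioned configuration object for the
// external AWS cloud provider.
type AWSCloudConfig struct {
	metav1.TypeMeta `json:",inline"`

	// Config holds the provider-wide settings.
	Config AWSConfig `json:"config"`
}

// AWSConfig carries the provider-wide settings themselves.
type AWSConfig struct {
	// ClusterName identifies the cluster that owns created resources.
	ClusterName string `json:"clusterName"`
}
```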
D
But yeah, I mean, I didn't think we needed to do a formal API review, because the whole v2 thing is alpha and, you know, whatever. But obviously, before we open this up and let users use this API, I think it should go through a formal API review so we can make sure we're all happy with this config. Or even now, getting some initial feedback would be great.
A
Yeah,
I
think
it's
it's,
it's
definitely
fine
to
that.
You
know
that
it's
merged,
and
I
I
think
this
is
great.
I
I
love
the
the
the
idea
of
having
the
version
config.
I
think
it
should
have
been
done
that
way
that
in
the
beginning,
I
don't
have
any
thoughts
yet
on
the
content,
but
I
think
as
we
as
we
use
it,
I'm
sure
stuff
will
come
up
so
yeah.
I
think
the
best
way
to
get
feedback
on
this
is
for
us
to
actually
use
it.
B
Technically, we don't need a Kubernetes API review unless it's in a k8s.io-namespaced API group, and I saw this was in aws.io, so I don't know if AWS cares. But the one thing I think might be relevant is whether we want to try to align the fields across other cloud providers. So, for example, if GCP does the same thing, we would probably prefer that they have the same path for their cluster name.
A
Okay,
so
that
is
actually
one
one
comment
that
I
would
have
is:
I
think
we
would
probably
prefer
it
if
it
was
one
of
our
because,
like
like
dot
aws
is
an
actual
domain
that
aws
owns,
but
I
don't
think
dot.
Aws.Io
is
so
we
could
just
do
config.aws
or
another
one
is
dot
amazon,
aws.com
or
something,
and
that
at
least
puts
it
into
aws
own
domains,
not
that
it
really
matters,
but
you
know
just
because
the
I
don't
know
I
o
doesn't
really
mean
anything.
D
Okay,
so
on
the
on
justin's
comment
on
like
consistency
to
my
knowledge
at
least
like
this-
is
the
first
like
provider
implementation
doing
this.
So
I
think
we're
kind
of
setting
the
precedent
here
on
what
the
field
should
look
like,
but
I
definitely
agree
like
if
we
can
formalize
this
a
bit
better.
I'd
love
to
see
all
the
cloud
providers
just
kind
of
following
the
same
format.
And
then
you
know
everyone
can
use
version
types
for
all
these
and
we
can
get
rid
of
any
stuff.
C
Yeah, so for the tags PR: we are using the cloud config struct defined in this PR, and, as we discussed before, we have a tag format that we are using for now. I think I mentioned it in the description; the format is kubernetes.io/cluster, and the tag value is the cluster ID or cluster name. So that implementation is based on this format and uses the defined cloud config.
C
It's just... I think it's just the initial pass at the tags implementation, so I would like to get any opinions or feedback on this PR. I'm still, yeah.
A
And
I
I
I
was
just
gonna
bring
up,
because
I
think
we
had
this
discussion
last
time
and
justin
wasn't
there
if
I
remember
correctly
and
we're
gonna
ask
about
the
the
behavior
of
or
the
kind
of
some
of
the
reasons
for
the
the
the
why
the
tag
was
the
way
it
was,
and
so
what
we
understand
is
that
the
tag
has
the
cluster
name
in
the
key,
so
that
you
can
you
know
if
a
resource
is
shared
you
can
you
can
tag
it
for
all
the
clusters
that
share
it.
A
But
the
problem
with
that
format
is
that
at
least
from
what
we've
heard
with
our
customers
is
that
when
they
try
to
organize
their
billing
by
tag
that
that
format
actually
makes
it
difficult
to
do,
because
you
can,
you
know
I've
never
done
this,
but
you
can
do
some
sort
of
a
sorting
by
tag
key
and
then
you,
you
divide
it
into
different
subcategories
by
the
the
value
of
that
key,
and
so
it
doesn't
really
make
make
sense
to
divide
it
into
different
building
categories
by
shared
or
owned
or
whatever
it
is.
A
So
we
were
sort
of
thinking.
You
know,
maybe
we
could
have
two
tags
that
could
cover
both
of
the
use
cases.
I
think
nadir
also
had
a
similar
feedback
that
he'd
heard
from
from
users
that
he's
worked
with.
I'm
not
sure
if
he's
actually
here
today.
B
By two tags, do you mean what's written here in the PR, right in the PR where we have those two and it's sort of an "or", or do you mean a second tag which is used for ownership?
A
The one... yeah. So I think, I mean, not all resources are ones customers care about tagging for billing. I think the biggest ones are load balancers, just anything that actually costs money. So I think the billing tag would be secondary.
B
Yeah
and
the
with
the
club
I
mean,
I
think,
for
example,
like
I
believe,
cops
lets.
You
add
additional
tags
to
all
your
resources
and
I
feel
like
we
could
also
all
the
ones
that
cops
creates,
and
we
could
also
have
the
cloud
provider
able
to
create
when
the
cloud
provider
creates
resources
able
to
add
whatever
tags
the
user
specifies.
I
think
that
would
be
valuable.
B
Kubernetes is going to be unhappy with you... yes, the state that you described is correct. We originally had the KubernetesCluster key with one value, and that meant we couldn't easily support shared VPCs, I think that was the big one, VPCs and subnets, and that's why we went to this format with the cluster ID in the key.
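To make the two shapes concrete, here is a minimal sketch of the legacy tag versus the current per-cluster tag as I understand them; the helper function itself is purely illustrative.

```go
// Legacy format: one well-known key, with the cluster name in the value.
//   KubernetesCluster = my-cluster
//
// Current format: the cluster name is part of the key, so a shared resource
// (for example a subnet or VPC) can carry one tag per cluster, and the value
// records ownership.
//   kubernetes.io/cluster/my-cluster = owned   (or "shared")
func clusterTag(clusterName string, shared bool) (key, value string) {
	key = "kubernetes.io/cluster/" + clusterName
	value = "owned"
	if shared {
		value = "shared"
	}
	return key, value
}
```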
B
I know that... I think we actually supported it, too. Oh, I think we deprecated it, but it turned out to be possible to support both formats. It was sort of tricky, though, so I'm wary of the tag that you need to select on when it's shared being different from the tag you need to select on when it's owned. Right? You don't know in advance whether, well, the cloud provider doesn't know in advance whether something is shared or owned.
B
So
when
it's
trying
to
discover
the
subnets,
it
now
has
to
query
both
of
those
tags.
If
it's
as
in
this
proposal,.
A
No, I think... yeah, I think we should use the previous one for querying: when the cloud provider is trying to discover a resource, the tag that exists today should be used. I was just, you know, trying to solve the billing issue, but I think your generic solution actually does make sense: just allowing the customer to define what their so-called billing tags are, and if those fail to be added, then it's just, you know...
D
Yeah, so are you saying that there are three supported tags, then? Like one, the standard cluster name; one that has the cluster name in the key; and then a separate billing tag used for... Or are we saying the billing tag is one of the two?
A
Yeah, I'm saying we should choose, like, from... so this first one, where the cluster name is in the value, that's solving the billing problem. So we could do something like that, or we could do something more generic, where the cloud provider can just, you know, add tags to all resources that it creates.
B
I personally prefer the second of those two. And then, if we want to solve the billing one... well, to solve billing, or, like, I understand that some companies have systems which go in and, if something isn't tagged a certain way, will delete the resource, just delete it out from underneath you immediately. So have a set of additional tags, I think that's what we call them, or I think we call them cloudLabels, anyway, let's call them additional tags, that are applied to all resources that the cloud provider creates, but the cloud provider will select only on this primary tag that we define. And we can debate whether it should repair the tags when it finds missing ones, things like that, but okay.
B
Yes, but you do also want to do it on create, because otherwise you might lose the race with the evil deletion machine.
D
Winning... I mean, I don't... so I don't understand the whole additional tags thing. So what we're saying is that the second tag is the standard tag, like you have to use that tag, but the first one is more like sugar on top, because we want to make billing easier. So why does the first one have to be set on create?
B
I
I
think
we're
saying
we
would
only
use
in
the
two
that
are
there.
We
would.
We
would
only
use
the
second
one.
We
would
use
kubernetes.
cluster
cluster
id
and
then
equals,
and
actually
the
value
doesn't
matter.
That's
the
tag
we
use
when
we
create
any
resources
or
select
any
resources
that
we
expect
to
be
pre-created,
because
that
doesn't
work
for
billing
and
doesn't
work
for
a
bunch
of
other
scenarios
where
users
have
like
the
deletion
machine.
B
We
allow
users
to
additionally
specify
a
set
of
additional
labels.
Additional
tags.
Excuse
me
additional
tags
that
they
would
like
to
be
on
any
of
their
resources
that
are
created
by
kubernetes
so
that
they,
those
are
also
picked
up
by
their
billing
report,
so
that
we
would
expect,
for
example,
that
the
company,
the
the
company
creating
the
cluster
would
say
you
know
I
want
my
additional
tag
to
be.
B
Billing
label
equals
finance
and
then,
if
they
have
like
two
billing
labels
or
hierarchical
billing
labels,
because
they're
a
mega
company
or
whatever
it
is,
we
can
support
that
as
well
and
if
they
have
to
have
additional
labels,
like
let's
say
they
have
a
pci
label
as
well
right,
they
can
add
that
at
the
same
time
like
they
can
just
add
whatever
we
don't
really
care.
What
those
additional
labels.
B
Yeah, yes, but... I think we should discourage people from using that particular, that actual, value, because it's going to cause chaos, right? So now I see it: if it's kubernetes cluster slash nothing, then what cluster does that belong to? It belongs to the empty cluster, which is weird. I don't know, don't worry about it, I'm nitpicking. Sorry, I think I misinterpreted; you don't like that it's too similar.
F
So
why
is
there
a
trailing
slash
in
this
case
like
just
curious
for
option,
one
like:
why
is
there
a
trailing
slash,
because
that's
not
something
that
I've
seen
commonly
for
the
tags,
any
specific
reason
for
the
trading.
I
think.
D
It's
just
a
typo,
I
don't,
I
don't
think
we
intended
it
to
be
there.
So
it's
just
about
the
kubernetes
I
slash
cluster
and
then.
E
I've
got
a
question
for
nick,
so
if
we
do
any
mechanisms
of
additional
tanks,
one
thing
we
noticing
kappa
is
resource.
Tagging
api
is
not
yet
available
in
some
of
the
govcloud
regions,
so
we
should
make
sure
we're
not
using
it.
I
guess
in
whatever
we
do
in
the
cloud
provider
as
well.
A
Yeah,
that's
that's.
That's
a
good
call,
but
like
everything
that
has
a
tag
on
create
should
be
should
be,
should
be
fine.
G
Okay,
nicole,
did
you
did
you
catch
all
that?
I
think
that
not
all
I'm
like
lost.
A
In
yeah,
so
there's
like
a
separate
api
that
that
is,
allows
you
to
tag
resources
and
and
it's
not
available
in
all
aws
regions.
A
So
when
we
can
just
when
you're,
when
we're
implementing
this
pr,
I
can.
I
can
take
a
look
at
what
what
you're,
what
you're
doing.
But
when
you
supply
tags
on
create
of
the
resource
that
should
be
supported
everywhere.
A
Go
ahead,
I
was
just
gonna
hit
andrew
because
you
mentioned
like
that.
You
would
prefer
this
to
be
in
a
separate
controller
and
I'm
kind
of
just
curious
what
you
were
thinking
there
like
what
the
benefit
would
be,
because
it
seems
to
me
that
it
would
be
easier
to
do
this
on
creation.
D
Yeah
without
having
to
occur
yeah
that
was
before
justin
said
the
thing
about
the
deleting
thing,
so
I
think
yeah
like
if
we,
if
we
need
a
race
with
other
things
that
can
delete
resources.
If
certain
types
of
this
exists,
then
then
yeah,
I
think
maybe
for
this
pr.
What
we
want
is
in
the
config
api,
add
a
field
called
additional
tags,
read
the
additional
text
and
then
just
append
that
into
everything
we
create.
B
Yeah
the
the
edge
case
that
I
was
the
reconciliation
is
if,
in
my
cloud,
the
cloud
config
object.
Now,
if
I
add
an
additional
tag,
what
do
we
do
with
the
existing
resources?
Sorry,
and
so
that's
where
you
might
want
to
say.
Well,
if
I
add
one
then
I
should
like
go
and
like
I
can
select
by
the
primary
tag
and
then
just
re
add
add
any
missing
tags
but
yeah,
so
that.
B
Separate
controller
yeah,
but
yeah
we
can
deal
with
that
the
when
you're
going
to
implement
this
by
the
way,
if
you
see
some
resource
creation
methods
that
do
not
pass
the
tags
in
on
creation,
it
is
likely
because
when
we
first
implemented
it
that
create
tags
on
create
was
not
there,
it
causes
all
sorts
of
race
conditions,
and
if,
if
you
can
pass
the
tags
on
create,
it
is
much
better
to
do
so.
So
don't
if
you
have
the
option,
please
please
do
it.
D
Okay, so, Nicole, I think concretely for this PR, what we want to do is: we want to standardize on the second label format for all the operations; in the config API, add a field called additionalTags, where users can put arbitrary key-values; and then we would have to read that and append it wherever we create resources with tagging.
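A rough sketch of what that could look like, with hypothetical field and helper names (not the final PR):

```go
// AWSConfig is a hypothetical versioned config fragment showing the
// additionalTags idea discussed above.
type AWSConfig struct {
	// ClusterName identifies the cluster that owns created resources.
	ClusterName string `json:"clusterName"`
	// AdditionalTags are arbitrary user-supplied key/value pairs applied to
	// every resource the cloud provider creates (billing labels, PCI labels, etc.).
	AdditionalTags map[string]string `json:"additionalTags,omitempty"`
}

// tagsForCreate merges the primary cluster tag with the user-supplied
// additional tags. Passing the full set on the create call itself avoids
// racing with external cleanup tooling that deletes untagged resources.
func tagsForCreate(cfg AWSConfig) map[string]string {
	tags := map[string]string{
		"kubernetes.io/cluster/" + cfg.ClusterName: "owned",
	}
	for k, v := range cfg.AdditionalTags {
		tags[k] = v
	}
	return tags
}
```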
D
Actually, I want to talk about the second PR there. So Nicole has also been pulling in Kubernetes HEAD to test some of the latest changes that Walter and CC have been doing around allowing a cloud provider to pull in the entire cloud-controller-manager module without k8s.io/kubernetes. So I think we don't import k8s.io/kubernetes anymore here, so maybe, Nicole, the only thing you have to do is go into the go.mod file, remove the k/k module, and try to compile it and see if it breaks.
A
And that actually reminded me, this is somewhat of a tangent, but, Andrew, since we're stopping new features in the legacy in-tree provider after 1.20, which is now...
A
...we should probably move the code over here, because there's going to be a significant gap between now and the time that the v2 provider is ready for use and ready for new features, and if we're not able to add any new features during that entire time... I mean, I don't think we want to be in that position. What do you guys think about that?
D
Yeah, so Walter and I have been going back and forth on this one. The tricky thing is, if we block features in-tree, there are a few providers that pull in the legacy cloud providers to build their out-of-tree providers, which is kind of what we did initially, because we wanted people to be able to build their out-of-tree provider for free without having to re-implement it. But now we're blocking features, which means they can't add features in-tree, which means they can't add features out of tree, right?
D
So
I
see
two
approaches
to
this.
Both
are
not
really
ideal,
but
the
first
approach
is
like
yeah
like
we
can
migrate
the
code
out
of
tree
and
then
re-vendor
it
back
in
tree.
That's
like
that
kind
of
sucks,
but
that's
one
solution.
D
The
second
solution
is,
we
can
make
an
exception
for
our
feature,
block
saying
that
you
can
only
allow
you
can
only
merge
features
and
tree
if
you
know
that
the
feature
cool
path,
only
touches
is
only
called
from
the
out
of
tree
binary
like,
and
that
gets
tricky
because,
like
someone
has
to
go
through
the
code
and
reason
about
if
it's
actually
an
auditory
feature
and
not
accidentally
entry
right-
and
you
have
to
have
a
really
good
understanding
of
the
various
cloud
controllers
to
understand.
D
If
and
you
have
to
have
a
good
understanding
of
like
cubelet,
to
understand
if
certain
methods
of
interface
actually
called
entry
or
auditory
so
like
that's,
also
kind
of
meh
and
then
the
third
one
is
like,
we
can
just
fork
it
and
then
just
let
it
diverge.
I
don't
know
those
are
kind
of
like
the
three
options
we've
discussed.
If
there
are
better
options
on
this
one,
I'm
happy
to
chat
about
it.
A
Yeah,
I
I'm
you
know
so
obviously,
as
you
said,
they're
not
ideal,
but
I'm
somewhat
more
comfortable
with
forking,
because
you
know,
if
we're
not
allowing
we're
very
strict,
we
have
the
the
machinery
in
place
to
prevent
features
from
merging
into
entry.
Then
the
diversions
shouldn't,
be,
I
mean
there.
A
You
know
there
will
be
back
ported
bug
fixes
over
there,
but
other
than
that
there
shouldn't
be
much.
You
know
should
be
too
difficult.
B
I also agree with forking, I think, because it will also enable code cleanups or refactoring, which could happen more easily in a copy. And yeah, I feel like that's sort of why we're doing this; I don't feel like we should do a v2 and then hamper ourselves with a very difficult process for making changes.
D
Okay,
yeah,
that
makes
sense
I'll
I'll
relay
that
feedback
to
walter
as
well,
and
just
tell
walter
to
form
the
gcp
one.
C
I think the load balancer PR would be better to review after the tags PR, because it depends on the tags implementation. So I think those two load balancer PRs are still works in progress, and we can probably discuss them later, I think.
D
Actually, so, because Justin wasn't on the call last time, I think it might be good just to touch base on the naming that we decided on and make sure Justin doesn't oppose it. So I think the naming we decided on for load balancers, or at least the default name, is a hash plus the first eight characters of the service...
B
And
I
that
sounds
that
sounds
good
to
me.
The
something
which
happened
with
el
elbs
or
elb1
was
that
that
the
name
is
exposed
publicly
in
the
dns
c
name
or
the
dns
name,
and
I
don't,
I
think,
that's
also
the
case
with
nlb
actually
right,
question
mark
and
then
just
like.
B
Is
that
a
is
that
is
that
an
information
disclosure
problem
and
is
there
a
way
to
say
actually,
like
my
service,
is
super
secret
and,
like
I
mean
we
could
just
say
like
use
a
different
name
like
don't
call
it
like
evil
evil
service
to
overtake
the
universe
like
give
it
an
alias
name,
nice
service
or
something?
But
I
don't
know
if
like
that
would
be.
I
only
only
concern
because,
yes,
it
makes
sense
and
it
avoids
the
problem
of
mutability,
like
we've,
always
in
the
past,
have
problems
with
the
name
being
mutable.
B
If
you
like,
allow
an
annotation
on
what
we
do
about
that.
But
if
it's
a
namespace
name,
that's
immutable.
So
that's
good.
D
Yeah
I
I'd
also
like
to
introduce
so
like,
in
the
same
way
we're
talking
about
like
naming
policies
like
we
may,
we
definitely
want
to
support
host
names
as
a
name
but
there's
a
use
case
for
using
instance
ids.
As
a
name
like
I
I'm
I'm
thinking
that
in
the
config
api
we
should.
We
should
just
codify
known
naming
policies
so
like
for
load
balancer.
The
default
can
be
like
what
we
just
said,
but
then
you
can
set
like
a
naming
policy
to
uuid
and
it
preserves
the
behavior
of
like
a
plus
no
uuid.
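As a purely hypothetical illustration of codifying naming policies in the versioned config (the policy names and formats below are assumptions, not anything decided in the meeting):

```go
// LoadBalancerNamingPolicy selects how the provider derives AWS load
// balancer names; illustrative sketch only.
type LoadBalancerNamingPolicy string

const (
	// NamingPolicyDefault derives the name from the Service, for example a
	// short hash plus a prefix of the service name, as discussed above.
	NamingPolicyDefault LoadBalancerNamingPolicy = "Default"
	// NamingPolicyUID preserves the legacy behavior of "a" followed by the
	// Service UID with dashes removed, as I understand it.
	NamingPolicyUID LoadBalancerNamingPolicy = "UID"
)
```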
B
Yeah, yeah. And so I think I need to look at the PRs, and now, as you've maybe realized, the other challenge is going to be if I change that strategy, and the canonical change will be when I upgrade from cloud provider v1 to cloud provider v2 with existing load balancers.
B
We might get away with tags, maybe. So, like, I know that the load balancer APIs in the past have not been as rich in terms of querying by tag, but maybe we can beg our nice friends at AWS to make that a bit richer, or maybe it already has been made richer.
F
...the AWS Load Balancer Controller, and that's how we do it: we don't depend on the name, we query using the AWS tags for the load balancer. That's something that we implemented recently to offload NLB IP mode from the cloud provider, so, cool.
F
Let's have a look at how you did that... we query the load balancers and then we match the tags. As far as I know, that's how it's been done; it queries in this mode.
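A rough sketch, assuming the aws-sdk-go v1 ELBv2 API, of discovering a load balancer by its tags instead of by name (pagination and error handling trimmed; the tag key and value are illustrative):

```go
package lbdiscovery

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// findLoadBalancerByTag returns the first load balancer carrying the given
// tag key/value pair, or nil if none matches.
func findLoadBalancerByTag(svc *elbv2.ELBV2, wantKey, wantValue string) (*elbv2.LoadBalancer, error) {
	lbs, err := svc.DescribeLoadBalancers(&elbv2.DescribeLoadBalancersInput{})
	if err != nil {
		return nil, err
	}
	for _, lb := range lbs.LoadBalancers {
		tags, err := svc.DescribeTags(&elbv2.DescribeTagsInput{
			ResourceArns: []*string{lb.LoadBalancerArn},
		})
		if err != nil {
			return nil, err
		}
		for _, td := range tags.TagDescriptions {
			for _, t := range td.Tags {
				if aws.StringValue(t.Key) == wantKey && aws.StringValue(t.Value) == wantValue {
					return lb, nil
				}
			}
		}
	}
	return nil, nil // not found
}
```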
B
One of the things we could also consider: when we were doing the v1, the original cloud provider, we didn't do this, but we could do more of a polling list strategy. I think we've talked about this a couple of times. It's a much more predictable load to list all load balancers every 30 seconds than it is to start listing individual load balancers in response to user actions, and we've had problems with that with volumes.
D
Yeah
I'd
love
to
explore
that
right,
because
kind
of
the
motivation
for
v2
is
like.
Let's
do
everything
we
did
wrong,
v1
right,
so
that,
like
I've,
seen
several
several
issues
like
in
kk
around
people
hitting
resource
quotas
and
whatever
or
api
quota
limits
so
yeah
like
if
we
can
do
something
that
like
pulls
the
entire
list
of
load
balancers
and
then
like
puts
it
in
memory
cache
or
something
that
would
prove
improve.
That,
like,
I
think,
that's
definitely
worth
exploring
early
on.
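A minimal sketch of that polling-and-cache idea (hypothetical helper, not existing code): list everything on a fixed interval and serve lookups from the in-memory snapshot instead of issuing per-service describe calls.

```go
package lbcache

import (
	"sync"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// lbCache keeps a periodically refreshed snapshot of all load balancers.
type lbCache struct {
	mu   sync.RWMutex
	lbs  map[string]*elbv2.LoadBalancer // keyed by ARN
	list func() ([]*elbv2.LoadBalancer, error)
}

// run refreshes the snapshot every interval until stop is closed.
func (c *lbCache) run(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if lbs, err := c.list(); err == nil {
			snapshot := make(map[string]*elbv2.LoadBalancer, len(lbs))
			for _, lb := range lbs {
				snapshot[aws.StringValue(lb.LoadBalancerArn)] = lb
			}
			c.mu.Lock()
			c.lbs = snapshot
			c.mu.Unlock()
		}
		select {
		case <-stop:
			return
		case <-ticker.C:
		}
	}
}
```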
A
The
other
thing
justin,
were
you
a
part
of
the
discussions
about
like
where
the
load
bouncer
code
will
likely
end
up.
A
So my understanding, Kishore, you can correct me, is that the NLB code will probably move to the AWS Load Balancer Controller, because we already have the... there are two modes, the IP mode and the instance mode, and one of those is in the Load Balancer Controller. So it kind of makes sense to just put the NLB code together. Kishore, feel free to add anything to that.
F
Yeah,
so
we
had
to
do
the
ip
mode
out
of
the
tree
because
of
the
api
limitations,
and
then,
since
we
already
have
the
code,
it
might
makes
more
sense
for
us
to
support
the
instance
mode
which
we
are
targeting
like
pretty
soon.
We
are
also
going
to
support
instance
mode
out
of
the
tree,
and
this
like
because
of
the
comparative
issues
we
still
have
to
leave
some
of
the
code
in
the
cloud
provider
like
some
basic
support
for
the
load
balancer
for
customers
who
do
not
want
to
install
the
controller.
F
They
will
have
some
basic
load
balancer
support
from
the
cloud
provider
code,
but
if
they
want
more
advanced
support
or
faster
changes,
they
would
want
to
go
to
the
load
balancer
controller,
which
provides
both
nlb
and
alb
support.
So
that's
how
we've
been
planning
for
now.
We
already
patched
the
cloud
provider
code
to
ignore,
like
certain
annotations
like
if
the
load
balancer
type
is
external
or
ip
nlp
ip.
We
make
the
aws
cloud
provider
ignore
creating
resources
for
that.
So
that's
the
path
that
we
have
been
taking.
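A rough sketch of that kind of annotation check, as I understand it; the annotation key matches the one used by the AWS Load Balancer Controller, but the exact values and behavior in the cloud provider may differ.

```go
package provider

import v1 "k8s.io/api/core/v1"

// lbTypeAnnotation selects which controller should manage the Service's
// load balancer.
const lbTypeAnnotation = "service.beta.kubernetes.io/aws-load-balancer-type"

// managedExternally reports whether the Service has opted out of the
// legacy cloud provider load balancer implementation.
func managedExternally(svc *v1.Service) bool {
	switch svc.Annotations[lbTypeAnnotation] {
	case "external", "nlb-ip":
		return true
	}
	return false
}
```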
F
That's
what
we
discussed
last
week
as
well,
and
the
eventual
plan
is
like
even
out
of
the
tree
cloud
provider.
If
we
can
provide
some
sort
of
library,
the
aws
load
balancer
code,
they
can
actually
use
the
same
code
for
like
cloud
provider
as
well.
So
that's
what
we've
been
talking
about.
B
Yeah, that makes a ton of sense to me: put the code in the best place, and then we can almost have the discussion of how to deploy it as sort of a separate discussion, right? Like, we could have two, I always get this wrong, two containers in the same pod, right, and sort of say, you know, do this; or we could actually vendor the code into cloud-provider-aws, and they have different advantages each. But, you know, I think it's not necessarily something user-facing.
D
Yeah, so I think, concretely, what that means for v2 is that we're only going to implement ELB, and it's not going to call NLB or anything. Like you said, maybe if the controller is vendorable we could pull it in, but I think it makes sense to just make that clear-cut line and say, by default we support ELB; if you want NLB or whatever fancy stuff, go install the other thing.
B
What I was going to say is, I guess, the flip side: how would you feel if we want to deprecate ELB v1, and we're like, well, cloud provider v2 only supports ELB v1, and if you want ELB v2 you have to go install this other thing? That feels a little off-message there, right?
F
Yeah,
so
there
are
some
hesitations
from
the
customers
like
they're
kind
of
like
having
to
install
a
controller.
They
kind
of
don't
like
that
idea.
Some
of
them
don't
prefer
that
so
we'll
have
to
come
up
with
some
story
in
that
case,
and
one
more
concern
is
like
you
would
also
involve
some
backward
comparative
ratios
right
now.
Suddenly,
like
some
load
balancer,
the
customers
had
in
v1
is
gonna.
Stop
working
v2,
so
we'd
have
to
also
work
around
that
story.
A
little
bit.
B
I don't know if Peter's listening, but we are doing, yeah, ELB, ELB v1, to NLB, yes, and it's proving tricky, and I think we're talking about how you move from one to the other, like cross-deletion. So it may be that putting them into one project might enable that migration to go more smoothly. Peter, do you want to give us a little bit of background on what's going on?
H
kOps exports admin credentials as a kubeconfig file for cluster admins. By default, it was exporting client certificates and basic auth credentials, and it turns out, when you provide an ACM cert, the client certificate couldn't be used, because the cert's not being passed through to the API server pods; the TLS session is only between the client and the classic load balancer. So that meant kubectl, or client-go...
H
Go
was
falling
back
to
basic
auth
credentials,
automatically,
silently
and
so
with
kubernetes
119,
which
removes
support
for
basic
auth.
That
meant
that
our
admin
credentials
could
no
longer
work
on
119
clusters.
So
what
we
ended
up
having
to
do
was
migrate.
H
The
api
server
load
balancer
to
an
nlb
and
set
up
a
second
listener,
so
that
there
will
always
be
one
listener
that
does
not
have
an
acm
server
and
then,
if
you
provide
an
acm
cert
that
will
end
up
on
a
second
listener
and
then
the
cube
config
file,
that's
generated
if
you're
exporting
it
to
hit
the
acm
listener.
It'll
use
the
second
port
and
it
will
not
have
the
cluster
ca.
H
But
then,
in
order
to
use
that
you
have
to
provide
an
external
credential
provider
or,
like
a
you,
know,
an
exec
whatever
I
forget
the
terminology,
but
then
that
also
allows
other
clients
that
do
use
client
certificates
to
use
the
tcp
listener,
and
so
then
the
way
that
we
had
it
configured
was
both
listeners
have
target
groups
that
target
the
exact
same
api
server,
ports
on
our
control,
plane,
nodes,
and
so
we've
had
challenges
with.
How
do
we
migrate?
H
It's been tricky: the NLBs have a status field that goes from something like "provisioning" to "active" that we can watch for, but classic does not, so there's just a bunch of gotchas that come with that. And how far are we willing to go to avoid, you know, a couple of minutes of downtime? So that's kind of where that's at. Sorry for the ramble, yikes.
B
It's good, and thank you, Peter, for all the work you've been doing on that. But, I mean, it's something that, if we ever want to get users that have created these ELB v1s elsewhere... to what extent do we want to take that on and help them, or are we just going to have ELB v1s around forever in practice, or at least as long as clusters live?
A
So it sounds like there are a lot of possibilities and a lot of things we have to think about, and the separation between the load balancer controllers and the ELB classic support in the cloud provider is not necessarily set in stone. So we might want to just keep revisiting this, and as customers start using the load balancer controllers, and once we get to v2, we can revisit, you know, whether we want to vendor the other load balancers back into the cloud provider.
D
Yeah, I think one of the compelling reasons I see for people not using the cloud provider for their load balancer implementations is that the interface is a little rigid, and you don't have the flexibility to reconcile the load balancer in the exact way you want to.
A
Any other topics that people want to cover? I wanted to just, so, since we're moving out of tree: what do people feel about... there's an NLB KEP, I think it was actually deleted, but there was an issue related to it. So I just kind of commented, like, you know, we're gonna stop accepting features, so I don't know that this... my monitor, my computer just wants to sleep.
A
Okay,
I
don't
know
that
you
know
this
nlb
feature
is
ever
going
to
get
to
ga
via
this
kept,
because
I
I
think
the
cup
was
actually
deleted.
So
do
we
want
to
have
like
some
some
kind
of
a?
I
don't
know
directory
of
of
like
proposals
or
do
we
do?
We
just
want
to
use
issues
or
do
we
actually
want
to
continue
writing
caps
in
in
the
kubernetes
enhancement.
A
Cool, I think that does it for the meeting, unless anybody has anything else. Justin, you can stop recording.