From YouTube: Kubernetes - AWS Provider - Meeting 20201002
Description
Recording of the AWS Provider subproject meeting held on 20201002
A: I can't recall if I said it, but if I did not say it: today is Friday, October 2nd, 2020. We have more than a handful of things on our agenda. Please do feel free to add your name and any other items to the agenda, so we can be sure to go through them all in order.
A: We probably don't need it, but if we do, please feel free to use the raised-hand feature. Otherwise, I guess we can jump right in with the first topic, which is Nadir's. Who wants to talk about the v2 provider?
B: Yeah, hi. So I think Andrew knows this work too. So we're doing a v2 provider; it's just getting started. We need to do some common things, such as adding Prometheus metrics, retry behavior, and throttling.
B: I was talking with Jay Price a couple of weeks ago, and he suggested we look at the App Mesh controller for k8s. There's a sort of private package in there that does some really nice throttling and does the metrics quite cleanly.
B: I'm just wondering if there's a case for making a sort of shared repo for a lot of the common behavior that we have across subprojects that work with AWS. I mean, there are also similar use cases in the cluster autoscaler. Maybe for the first pass we can just copy these into the separate repos, but I think over time it might make sense to have a shared repository for this. I just wonder what people think.
A: I think it sounds like a great idea. I think a nice way to get started is to put it into your repo and make it a Go module, and then it should be consumable elsewhere without even needing your repo. I don't know if we need to do more than that right away, but I feel like it's a great way to do it and demonstrate it. I have some other thoughts about what this should look like, but I'll let Andrew go first.
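A minimal sketch of the Go-module pattern being suggested; the module path below is hypothetical, not a settled location:

```
# go.mod at the root of the repo hosting the shared code
# (module path is hypothetical)
module sigs.k8s.io/cloud-provider-aws

go 1.15
```

With that in place, another subproject could import, say, a hypothetical `sigs.k8s.io/cloud-provider-aws/pkg/throttle` package directly, and `go get` would fetch just the module, without vendoring or otherwise depending on the rest of the repo.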
C: Yeah, I was basically... yeah, I agree with you, Justin. There were some useful utils and libraries, even in the current legacy provider, that would have been useful to port over, but it's not its own standalone library, which makes it hard. And across all the repos in the Kubernetes community that talk to the AWS APIs, it seems like there's a lot of duplicated code, so I think it would be great if we could agree on a central place. Maybe it's cloud-provider-aws?
C: Maybe it's something else. But as we're building up the initial v2 implementation, there's a lot of code that we're just copying over, and it'd be great if it was just, you know, shared libraries.
A: Yeah, the thing I wanted to say was: we historically had trouble with rate limits, and the throttler in the legacy cloud provider is non-obvious in its behavior, in that it will back off all calls to the AWS API across the entire process based on observing rate-limit errors. That is not the default behavior of the AWS SDK, which, if I recall correctly, backs off that particular call. The reason we did this was because we had some bugs, and sometimes not a bug but just aggressive behavior, where we would call the AWS APIs effectively in a loop, in a reconcile loop, and we would hit a rate limit and basically shut down, or get the entire AWS account into a state where AWS would not allocate capacity to any more API calls on that account. So if you had anything else running in your AWS account, it would not be able to execute AWS calls.
D: One of the things we've... I think generally they're API-specific or service-specific, so I feel like going with a per-service rate limit would be the way to go.
A: Yes, yeah. I mean, it should be, yes. I can't remember whether we did it on all services or not, but if I recall correctly, the default AWS SDK does not do per-service rate limits; it does per-call backoff. So that's a good behavior to look at. The other one is a sort of meta point.
A: One of the reasons we had this problem, I think, was because, you know, if you're attaching a hundred volumes concurrently, we tried to treat those as 100 separate operations. There's another approach, which I want to say AWS had in a project called Blox, or box, or something. Anyway:
A: It showed a way where you would bulk-fetch the volumes, like get all the volumes, in a sort of bounded synchronization loop, so that the load would basically be constant: if you were reconciling, you would make one DescribeVolumes call every N seconds, as opposed to a variable number. So that was a nice model, and it might be worth thinking about as the other way to address this challenge.
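One way that bounded synchronization loop could look, sketched against aws-sdk-go; the `reconcile` hook is hypothetical:

```go
// Bulk-fetch all volumes once per period instead of issuing one
// DescribeVolumes call per volume, so API load stays constant no
// matter how many volumes are being reconciled.
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	ticker := time.NewTicker(30 * time.Second) // the "every N seconds" knob
	defer ticker.Stop()

	for range ticker.C {
		var volumes []*ec2.Volume
		// One paginated bulk call per tick, not one call per volume.
		err := svc.DescribeVolumesPages(&ec2.DescribeVolumesInput{},
			func(page *ec2.DescribeVolumesOutput, lastPage bool) bool {
				volumes = append(volumes, page.Volumes...)
				return true
			})
		if err != nil {
			log.Printf("describe volumes: %v", err)
			continue
		}
		reconcile(volumes) // hypothetical: diff observed state against desired
	}
}

func reconcile(volumes []*ec2.Volume) { /* ... */ }
```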
B: Yeah, I think that makes sense. The one from the App Mesh controller is pretty nice: you basically define a bucket on regexes for different API service calls, and it's all configurable on the command line as well, so you can override everything, and it will just block the entire SDK on those lists of regexes that you define. So it's pretty neat in the way it operates.
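A minimal sketch of that shape, using golang.org/x/time/rate; the bucket definitions and the way it would hook into the SDK are assumptions here, not the App Mesh controller's actual code:

```go
// Route each AWS call through every token bucket whose regex matches
// a "Service.Operation" key; unmatched calls pass through unthrottled.
package main

import (
	"context"
	"regexp"

	"golang.org/x/time/rate"
)

type bucket struct {
	match   *regexp.Regexp
	limiter *rate.Limiter
}

type throttler struct{ buckets []bucket }

// Wait blocks until every matching bucket admits the call.
func (t *throttler) Wait(ctx context.Context, call string) error {
	for _, b := range t.buckets {
		if b.match.MatchString(call) {
			if err := b.limiter.Wait(ctx); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	t := &throttler{buckets: []bucket{
		// These could be parsed from command-line flags, e.g. "regex=qps:burst".
		{regexp.MustCompile(`^EC2\.Describe`), rate.NewLimiter(rate.Limit(10), 20)},
		{regexp.MustCompile(`^AppMesh\.`), rate.NewLimiter(rate.Limit(5), 10)},
	}}
	_ = t.Wait(context.Background(), "EC2.DescribeVolumes")
	// Wired in as an SDK request handler, this would block the whole SDK
	// on the matching buckets, as described above.
}
```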
A: That sounds nice. It sounds like a much more general, and perhaps better implemented, version of the thing I hacked together a couple of years ago. Yes, because you want a cross-call bucket, effectively. And I like that approach, the regex approach; that's nice.
D: So if we like the throttling library there, then... I feel like cloud-provider-aws would be a better place for a central library than the App Mesh controller. So should we maybe try to move that? I can bring that up with the maintainers over there.
A: Okay, cool. If there's anything else on that... if not, we can go on to the next topic, which is Nadir on e2e testing and some SSM permissions.
B: Yeah, so now that we've got the conformance test stuff into the Cluster API test framework, we're going to start adding e2e tests to the cloud-provider-aws repository. One of the things we do is loop over all the EC2 instances in the Boskos account and just scrape files off them. We do that using SSM Session Manager, but our Prow accounts are sort of completely blocked from using SSM; I think it's set up in some organizational policy.
B: It's been like that for ages, so we've not been able to get instance logs in Cluster API for quite a while. It would be good to get that cleaned up, because often, if something is badly wrong with the cloud provider, you're not going to have a working cluster, and you need some out-of-band process to be able to get logs for debugging.
A: Yes, I think this might be blocked on me, and I think I was blocked on getting a PR merged for that, actually, because I scripted the setup of these and then... oh yeah, it's still working now. It's been open; it just hasn't been approved. Why has it not been approved?
A: But yes, if you look at this PR that I'm about to paste in... of course, I'm sure when I scroll down, the last comment is going to be something I have to do, but anyway, this PR... yeah, that's the PR which adds the accounts, and I'm just trying to see.
A: Okay, so I guess I should put it in a GCS bucket; I guess that's what we're blocked on. But those are the accounts I hope you're using, and if we scroll down, somewhere there should be a list of the policies. I'm just sort of scrolling.
A: IAMFullAccess, AmazonEC2FullAccess, AWSDeepRacerCloudFormationAccessPolicy, which seems to be the pre-built one for CloudFormation. So we effectively need to add SSM there.
A: That's a topic for another day. Okay, the one you want is... you want SSM for... yes. If I get this merged, we should be able to PR in the AmazonSSMFullAccess policy. And then, so I should rerun it, but it sounds like I need to set up the GCS bucket as the back-end store for the Terraform state, and then that will unblock the objections, I guess.
B: Yeah, I'm pretty sure we're using them. And in terms of the e2e tests, we should use the same accounts for cloud provider testing, the same ones that we're using for Cluster API.
A: From my point of view, I don't see why not. There is the potential to set up different pools of accounts, different Boskos pools of accounts; I'm not entirely sure how that works. So if we did want to start doing that, we could, but I don't know if it really solves a problem, to be honest. All right.
A: Okay, so it sounds like I have some stuff to do there. But if there's nothing else, the next item on our agenda: Andrew, the new node naming policies for the v2 provider. Hopefully there will be no policy, but carry on.
C: Yeah, so I guess that's part of the discussion: do we even need a config for the user to indicate what the naming of a node should be? So right now, Nicole, a co-worker of mine, is working on the initial InstancesV2 implementation, and the thing we need to decide is how we discover nodes, because we can't depend on the kubelet assigning the provider ID ahead of time.
A: Andrew, can you clarify the issue, or why we can't depend on the kubelet or the provider ID field?
C: Yeah, sure. I probably should start with that context. So today, if you set the cloud provider to aws in the kubelet, it knows how to query the metadata service and find the provider ID, and it'll set its provider ID. But in the external case, the kubelet doesn't understand that; it just registers itself by name, and it's up to the provider to discover the node by name, which is tricky because it kind of assumes certain properties of the node and the instance that make that possible.
C: There need to be some known semantics for how instance names match to nodes. Maybe it's auto-discovered using a set of, you know, known formats that the instance should be named with, or there should be a config that says the name has to be the private DNS, or the name has to be the instance name or the instance ID or whatnot, in order for the external cloud controllers to discover and register the instance.
A: The other option would be to have a component that can, effectively immediately after the node has been registered by the kubelet, assign the provider ID and write it in. Cluster API, I think, would have a mechanism to do that.
A: I think kOps on AWS now has a mechanism to do that, because we do our own handshake with the kubelet. Essentially, you give the kubelet a unique bootstrap token, I think, and then when it needs credentials, you can sort of thread it through. We are doing that in kOps for security.
A: There is still some form of mapping from a kubelet-created node, which does not have a provider ID, to a provider ID, but that can be a sort of first step before anything else is allowed to happen, like before you start attaching volumes and stuff to it.
A: I'm not sure I'm following. So you would then not... there wouldn't be a requirement on the node name, necessarily. The AWS cloud provider v2 would rely on the provider ID, and there would be some component which implements a policy for populating the provider ID. That might be a policy on the node name, or, in the case of Cluster API or kOps, it might be a policy on how the node got created.
A: I'm sure Cluster API and kOps would both be very happy to sort of standardize that flow. The reason kOps is doing it is for security, like to make sure that you can't just arbitrarily register a node, and that there's basically a cross-check, in our case against the AWS API, to go and assert the validity of the nodes joining.
C: Okay, yeah, that makes sense. But so I guess my assumption here is that there needs to be a mechanism in the cloud provider... like, if I rolled my own cluster and manually created everything, there needs to be a mechanism where the provider ID can be discovered. By default, the actual logic in the controller, if it sees the provider ID already set on a node, just skips that step and assumes that the predefined provider ID is valid. So I think, like...
C: So I think the kOps case and the Cluster API case are already solved. We mostly need to be concerned with the case where the provider ID isn't set, there's nothing that's going to set it, and we need to discover what it is.
A: That makes sense. I think, perhaps for the roll-your-own-cluster case, maybe it is okay to say: if you're not using some sort of automatic provider ID populator, then set the provider ID using that flag you're talking about, like pass the provider ID to the kubelet.
C: Okay. And so I guess the question is: do we think that's acceptable? Because, again, it does put the burden on the user to know what the ID is and, you know, write a script or something that will dynamically fetch the ID and put it in the kubelet and whatnot. It sounds like something that we should support some way of doing automatically, but I don't know yet. Yeah, what do people think?
A: I don't see anyone unmuting. To me, I mean, I feel like having an example like that wouldn't be that hard, right? It would be like a shell script that effectively called... I don't know if there is an AWS command to query the metadata, but you could curl it.
C: Right. I think the other tricky part is that it could be error-prone to generate the provider ID, because it's like aws, colon, slash, zone, slash, instance ID, or whatever it is.
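For reference, a sketch of what such a helper could do: query the instance metadata service and print a provider ID for the kubelet's --provider-id flag. The aws:///zone/instance-id scheme shown is the in-tree convention as best I can tell; verify it against your provider version before relying on it:

```go
// Build an AWS provider ID, e.g. aws:///us-west-2a/i-0123456789abcdef0,
// from the instance metadata service.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func metadata(path string) string {
	resp, err := http.Get("http://169.254.169.254/latest/meta-data/" + path)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	az := metadata("placement/availability-zone")
	id := metadata("instance-id")
	fmt.Printf("aws:///%s/%s\n", az, id) // pass to the kubelet as --provider-id
}
```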
A: Yeah, and maybe we could relax that and just tolerate... I don't know whether we want to relax it, but just tolerate a bare instance ID.
A: I think some tools use the aws: prefix to recognize whether this is, you know, a Kubernetes cluster running on AWS or running on GCP, so that might be a little bit breaky. But maybe we'd just do aws:/// and then the instance ID, or something like that.
C: Yeah, so the other option here, and this is what the Azure folks do, which I really want to avoid doing, but it is an option: they run a DaemonSet on the cluster that acts as the node registration control loop. It'll catch the node event when the instance registers, and then it'll query the instance metadata, because it's a DaemonSet, and it'll have permissions to register the node back to the API server, so that the external control loops can proceed.
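A rough sketch of the core step such a registration loop performs, assuming client-go and that the pod learns its node name via the downward API; this is illustrative, not the Azure implementation:

```go
// Patch this node's spec.providerID once the kubelet has registered it,
// so the external control loops can proceed.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := os.Getenv("NODE_NAME") // injected via the downward API
	providerID := lookupProviderID()   // hypothetical: from instance metadata

	patch := []byte(fmt.Sprintf(`{"spec":{"providerID":%q}}`, providerID))
	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}

// lookupProviderID would query the metadata service as sketched earlier.
func lookupProviderID() string { return "" }
```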
A: Yeah, it's not a terrible fallback, though, right? So it sounds like we have a bunch of options: if you're running Cluster API or kOps or other tooling, you might have some different flow when you register; if you want to keep it simple, if you're doing Kubernetes the hard way and you want to do it in a bash script, you can pass it by the flag; and if you want to do it in your own tooling, you can run the DaemonSet.
A: I don't know, it feels like... so the big thing for me is that no matter what, if we choose any policy, we're going to upset people; we're going to exclude some people. I think in terms of the mapping, someone's going to want to use a different mapping on the node name, and it feels so niche.
C: Yeah, so what I was actually going to end up proposing is: should there be a config toggle that defines the node naming policy, in line with how you provision your cluster?
C: Sure, like maybe it's a config file or a ConfigMap; I don't know yet how we would store it. But there is a config field, maybe named nodeNamePolicy, and its initial values can be private DNS and instance name. So, depending on whatever the user set in the ConfigMap or whatnot, the controller will try to find the corresponding private DNS, same with instance name. Based on that, I guess we'd also need a cluster tag, depending on...
C: So it assumes that the way the node registers itself, and the name that it gives itself, is mapped back to instances in a known way, and the controller knows how to query it based on that.
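A minimal sketch of what such a toggle could look like; the field name and values are only the ones floated in this discussion, not a settled API:

```go
// NodeNamePolicy tells the controller how a registered node name maps
// back to an EC2 instance.
type NodeNamePolicy string

const (
	// Node name is the instance's private DNS name,
	// e.g. ip-10-0-0-1.ec2.internal.
	NodeNamePolicyPrivateDNS NodeNamePolicy = "PrivateDNS"
	// Node name is the instance's Name tag; discovery would also need
	// the cluster tag to disambiguate, as noted above.
	NodeNamePolicyInstanceName NodeNamePolicy = "InstanceName"
)

// CloudConfig is wherever the toggle ends up living
// (config file or ConfigMap; undecided above).
type CloudConfig struct {
	NodeNamePolicy NodeNamePolicy `json:"nodeNamePolicy"`
}
```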
A: One thing that springs to mind: we've made great strides in security over the past couple of years, and do we want to allow the kubelet to be the start of this chain of trust? Do we have any way to prevent a kubelet registering with another node's name (obviously not one that already exists, but another node's name)? That is, I guess, my question.
A: I mean, couldn't it do that today? Yes, and that's what we're plugging with some of these strategies, where we basically give each node a unique token, either through Cluster API or through kOps. That token is associated with a user, I think a username, and so we're able to effectively prevent incorrect registrations, prevent impersonation.
B: Yeah, so actually, Cluster API is not generating the provider ID ahead of time, because it's relying on kubeadm, so we still have that security problem today in Cluster API. What we have is the noderef controller that matches the node instance ID to the Cluster API Machine and then fills in a node reference.
B: Then we check that the AWS DNS host name matches on the IAM call, so we use IAM as a host attestation mechanism. So, yeah, private DNS then breaks that mechanism in terms of host attestation for that node, which I think is again another reason why most customers often just solve for the instance ID.
C: Yeah, I'm having a hard time figuring out where the security implications are, because the node naming policy gives you the flexibility to choose a policy that would enforce what the registered node name is. Like, if you want to keep it as the private DNS, you can do that, but if you wanted to switch it to something else, like instance ID, you can do that too. Because right now, like...
A: Yeah, I think you're right. I think I want to try to avoid us baking into the cloud provider the assumption that the node name...
A: Well, I don't know. I was going to say that the node name can be trusted, but I guess we could always enforce that trust, like through an admission-controller-type thing, so maybe it's fine.
A: Yeah, I mean, historically kOps gave nodes permission to register with any node name, so in theory, if you spotted another instance booting up that was in a different instance group or whatever, you could take that node name, and then you would be able to access the secrets for that node, which might be more privileged.
A: We need to figure that out. I don't think it was more than a theoretical problem we were closing; I think we need to figure out whether it's a real problem.
C: Yeah, that makes sense to me. It sounds like not a cloud provider problem, but a problem for whoever is lifecycle-managing or provisioning the cluster. But yeah, I'm missing context, so maybe I'll do some reading on that, and we can discuss this next time. I can also think about it tomorrow.
A: Okay, so I guess we can do the next topic, then. Nick, I think you were going to talk about the Prow build-container-and-push job. That's what it says.
D: There's a link, yeah. I just opened... I was looking at the vSphere provider, so I just did a kind of strawman build-and-release script for a Prow job, and some config in the test-infra repository, which I think should be linked in that PR, hopefully. So I just wanted someone who is familiar with the vSphere provider to take a look at it.
D: I tried to simplify what you guys had, because I wasn't sure what was actually necessary and what wasn't. You had some stuff where you give the ability to choose whether or not to mount the Docker socket in the container, but there was some documentation saying that in Prow this isn't necessary anyway, so I left that out for now. If anybody wants to take a look and see whether it's heading in the right direction or not, that would be cool.
C: Yeah, I can take a look. Those Prow jobs are pretty old, so they're not the best example, but a good starting point, so feel free to iterate on it. But yeah, I can take a look. Yeah, sounds good, cool.
A: Yeah, FYI, I've been wrapping it in a Cloud Build as well, so Prow kicks off a Cloud Build, which kicks off a make, which kicks off the build. The reason for doing that is that, I don't know, it feels a little more trusted, in that there's the extra sandbox. But it runs on the... anyway, all right. Yeah, I mean, if you want to add another direction...
D: Yeah, exactly. If you feel strongly about that approach, just throw a comment on the PR. I don't feel that strongly.
D: Got it, sure. Yeah, if you have an example to link, that'll be something to compare to; that would be good.
A: The next topic is Nick again: UX for kOps configuration.
D: Yeah, so I just want to get this PR wrapped up soon. There are two subtopics here, actually, one of which I didn't mention, but Andrew and I were talking about it, so I'll just mention that really quickly.
D: So I was looking a little bit into the failures. Basically, with this kOps PR that runs the external cloud provider, running the e2e tests on the clusters that I create with it, where everything is set to cloud-provider external, I get failures for the volume e2e tests. I haven't completely dug into it.
D: It can't find where the volume is mounted. So the attach succeeds, but there's something going on where it's not able to mount, and I think it's because it's actually missing the initialization of this device mounter or attacher piece of code. Anyway, I can get around those errors by passing aws to the kubelet, or, you know, if we were to use CSI...
D: That should work as well. But that's one piece that sort of led into the other thing, which is my question: do we want to allow customers, or users, to configure each of these flags? So you have your kube-apiserver cloud-provider flag...
D: You have your kubelet cloud-provider flag; there's even a top-level cloud provider field in the kOps configuration; and then there's also, if you're not going with CSI, this external-cloud-volume-plugin flag, which I've added as part of this PR. So is it generally the kOps strategy to let customers configure each of these flags individually and then validate for incorrect configurations?
A: Both, is the answer, I think. So, for example, in the cluster spec there is a kubeAPIServer block, which maps directly to flags on the API server; using those is sort of considered advanced.
A: You don't have to do it; most people should not have to do it for normal kOps operation. Those are sort of direct flag manipulation, and you get into undefined, unsupported behavior: we can't guarantee that a flag will continue to be supported in the next version. We try to do some validation, but in practice we can't validate everything, so we validate the things which catch ourselves out, and basically don't do too much beyond the top-level fields.
A: Things like spec.cloudProvider are more of a statement of intent, as in "I want to use cloud provider aws", and so what we will do is expand that out and populate the flags if they aren't already set.
A: So, in other words, there is what's called a full spec, so you can do something like "kops get cluster $NAME --full", I think, and you will see the fully expanded specification, which includes all the inferred fields, for example on the kube-apiserver. Specifically here, for example, you have this kubeControllerManager externalCloudVolumePlugin: if we are using the external cloud provider on AWS, and let's assume that we don't support CSI yet, then we would automatically set this flag, assuming it's required.
A: We'd automatically set this flag unless the user had overridden it in some way, and that's why all these flags, like externalCloudVolumePlugin, should make it possible to tell whether they're set or not. So a lot of them are pointers to strings or pointers to ints. In this case, I think we're assuming that the empty value is considered the not-set value, so there's not going to be a way to clear this value, but that's okay.
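A minimal sketch of that defaulting pattern; the types are simplified stand-ins for the real kOps API, and the exact condition is assumed:

```go
// Only fill in the deep field when the user hasn't expressed an opinion;
// the empty string is treated as "not set", per the discussion above.
func defaultExternalCloudVolumePlugin(spec *ClusterSpec, csiEnabled bool) {
	if spec.CloudProvider != "aws" || csiEnabled {
		return
	}
	if spec.KubeControllerManager.ExternalCloudVolumePlugin == "" {
		spec.KubeControllerManager.ExternalCloudVolumePlugin = "aws"
	}
}

// Simplified stand-ins for the real kOps cluster spec types.
type ClusterSpec struct {
	CloudProvider         string
	KubeControllerManager KubeControllerManagerConfig
}

type KubeControllerManagerConfig struct {
	ExternalCloudVolumePlugin string
}
```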
A: If that becomes a problem, we can actually safely switch to a string pointer. But anyway, that's the long answer. The short answer is: effectively, in the package model/components, as I sort of already said, you look at the top-level fields, and if the user has not set the deep fields, you set them to what they should be. That doesn't get persisted into the cluster spec, but you can see it in the full spec. All right. Okay.
A: Cool. I don't know if we currently have a way to enable or disable CSI in the kOps spec.
C: Yeah, so the volumes test failing is interesting, because, at least in talking with Michelle, the mount logic in the kubelet should be independent of the cloud provider. So my thinking is maybe the node was not registered properly or something, or the kubelet depends on some node field that the cloud provider sets, and maybe the volume test runs without those things set yet. But yeah, I think that's worth digging into; maybe there's a bug somewhere.
A: Right, thanks for working on that, Nick. Eric... oh sorry, if there's nothing else on that: Eric, a PR for legacy multi-cert.
E: Yeah, so this kind of came about from our internal usage of the load balancers and multi-cert support, and then I saw that someone else had an issue on it.
E: So if you were to give it a try... I just patched it for the legacy cloud provider. I guess I was mainly looking for reviews, but also some feedback in general, because I'm not sure what the process is for patching the legacy cloud provider, given I know we're working on the v2 stuff. But yeah, I tested this when I was working on it against the 1.18 tag.
A: Yeah... although it's the legacy cloud provider... wow, all right. Let me take the first part first, and then we can talk about something else. Although it is the legacy cloud provider, I wouldn't imagine you would backport it to older versions; this would go into the next version of Kubernetes, rather than going backwards. I know I saw a thread around the status of introducing features into the legacy cloud provider; I don't know where we landed on that, or whether we consider this a feature.
C: Yeah, so the consensus from the SIG was that we're going to add some Prow automation, starting in 1.21, where we are not allowing PRs labeled kind/feature into the legacy provider, and if you do need to add one for some critical reason, there needs to be some discussion around it from the SIG. This is just to make sure that we're investing in the right places and getting the external providers, you know, going. Why not?
C: Technically speaking, yes, but if you ask me, everything should be going into out-of-tree. And so it's tricky, because we support the legacy provider in an out-of-tree mode, and so that should be okay to go in past 1.20. I think at some point we need to fork what's currently legacy-cloud-providers into its own tree, so that only the external one picks it up, which is going to be complicated. But yeah, I'm... I don't...
D: I think it might make sense to, you know... if we're stopping new features from being merged in 1.21, then we have that sort of period where, starting 1.21, nothing gets merged in, and then either...
D: I don't know, the next release or immediately, we fork it, and the in-tree code is just forever not accepting new features.
A: Yeah, I don't want to get us into a state where we can't add features somewhere, right? I don't know, it feels like getting this into 1.20, into the legacy provider, is the place. Then maybe we can ask Eric to also add it to the new provider, although it sounds like that would happen automatically anyway. Is that true?
D: Yeah, so... I mean, that's something we need to figure out, but the LB controller is... I mean, technically, if we're just talking about the external cloud provider that's importing the legacy cloud provider, then yeah: all you need to do right now is get it into the legacy cloud provider, and it will automatically be imported, right?
A: I think that was the last thing on our agenda. I don't know if anyone has any other topics they would like to bring up, or whether we would like eight minutes back of our Fridays.
A
All
right
well,
thank
you
all
for
attending
and
we
will
see
you
in
two
weeks.
I
think
there
are
some
could
be
some
great
topics
around
the
code
organization.
I
guess
or
code
structuring,
so
enjoy
your
two
weeks
and
see
you
in
two
weeks.
Thank
you.