From YouTube: Kubernetes - AWS Provider - Meeting 20220610
Description
Recording of the AWS Provider subproject meeting held on 20220610
Agenda - https://docs.google.com/document/d/1-i0xQidlXnFEP9fXHWkBxqySkXwJnrGJP9OGyP2_P14/
* KIT (Kubernetes Iteration Toolkit) Demo (https://github.com/awslabs/kubernetes-iteration-toolkit)
* CCM: https://github.com/kubernetes/cloud-provider-aws/issues/400 - new images that match releases
* CCM: https://github.com/kubernetes/cloud-provider-aws/pull/388 - release tagging
A
All right, hello everyone, welcome to the Provider AWS meeting on June 10th. We are going to start with a demo today, assuming we can figure out screen sharing, and then we'll move on to the agenda. So let me see if the option... okay, looks like I can make you co-host here, critique. So there we go, I think that should be enough.
B
Okay, let me know if you guys are able to see my slides. Cool. So some of you might not be familiar with this project that I'm trying to demo today: it's the Kubernetes Iteration Toolkit. It's an internal project, although it's open source, and I just wanted to give a quick demo and get some feedback from the group on what others think, whether it might be useful, and whether there is some other set of features or functionality you think might be useful to add.
The idea here is I'll spend the initial five minutes going over the architecture and some background around what the project does, how it is being designed, and why we are building this project, and then I'll do a quick demo to show how to start using it. While I'm demoing or presenting, if there are any questions, feel free to stop me.
What we are trying to solve is, as I said: as the EKS team or the scalability teams, we tweak many flags and have to check the performance in Kubernetes, or on the instances where Kubernetes is running, so we wanted to accelerate that testing and be able to get the results faster. I'll show in a second what I mean by accelerating the testing. So the goal here is that all the configuration that we have for the infra and for the cluster,
and all the test cases, is declarative; it all lives in a GitHub repo and it all gets synced to these clusters. It's easy to iterate over: if you need to change a flag, it takes about 30 seconds to update a flag in your control plane, and then you can run tests against that. And then, if you need to swap out the instance for a larger or a smaller instance, that takes about 60 to 90 seconds, based on the boot time for the instance, and everything gets configured based on that.
So some of the use cases that we are trying to cover here: as a Kubernetes user or a developer, I want to create, let's say, X number of secrets. Now, each use case is different. Somebody might have a secret object which is 10 kilobytes; somebody might have a secret object which is 1 MB, for example. So now there is a limit to how many secrets you can create in each of these scenarios, and you want to measure what your SLOs are when you're creating, let's say, 10,000 or 20,000 secrets, how that impacts your API server, and whether changing some flags on the control plane or on etcd impacts the performance.
And now I want to make sure that when these control planes are running on different instance types, let's say medium up to 24xlarge, how does that impact performance, or is there no change at all when I'm changing these instance types or the EBS volumes? Now, as an advanced user, I have some changes that I make to the API server image, and I want to run that image easily and test my changes.
But now I want to redo all the testing that I have done, with the number of objects that I have created and the flags that I want to tweak, and I also want to run these control planes on different instances with my custom image. So there can be multiple iterations like these that you need to go through as a user or as a Kubernetes developer, which in some cases might cause delays. And as part of this testing, you are always checking on what your resource usage
looks like, what your latency is, and what your SLOs are. I want to collect all the logs and metrics from the instance level, from the Kubernetes control plane, and from etcd, and be able to view them easily and quickly enough. So that was, at a very high level, the use case that we are trying to solve for. To just go over how we are solving it using KIT: as I mentioned, it's a collection of open-source tools, and at a very high level we divide the whole toolkit into four components.
One is your environment. The environment is basically a new VPC that we create, where we configure everything, we run the tests, and we tear everything down. So we call it a KIT environment, where we have a management cluster, and these management clusters manage your new test Kubernetes clusters. You can bring up multiple Kubernetes test clusters in that environment and tear them down in the same environment, and then, if you don't want the environment to be a long-running environment, you can also tear down that environment.
But the idea here is that the environment can be a short-lived environment just for one single test, or it can be a long-running environment that stays for days or months and you clean up later on. The second component that we have is an operator component that helps us create vanilla Kubernetes clusters in two to three minutes on EC2 instances that are designed very closely to an EKS control plane, so that you can run tests against a vanilla Kubernetes cluster where you have control over how you change these flags, instance types, whatever configuration you want to change. You can change that on these guest clusters, run some tests, collect results, and whenever you are satisfied that this is the flag value I want,
B
We
can
then
port
it
over
to
into
the
eks
control
pane
and
then
run
the
test
against
the
eks
control.
Pin
again
so.
The
goal
here
is
to
iterate
faster
over
these,
so
that
so
that,
once
we
are
ready
with
those
changes
you
can
upstream
those
changes
to
eks.
B
The
third
piece
that
we
have
is
that
the
ci
the
ci
pipeline
we
use
tecton
for
it,
because
it
gives
us
ability
to
create
these
modular
tasks
and
we
can
reuse
these
tasks
in
different
pipelines
and
I'll
go
over
those
some
of
these
tasks
to
demonstrate
why
we
chose
a
tecton
and
the
fourth
final
piece
is
prometheus
and
grafana.
So
anytime,
you
create
a
test
cluster.
B
We
configure
prometheus
and
grafana
to
con
to
to
scrape
all
the
metrics
from
that
test.
Cluster
and
grafana
comes
pre-loaded
in
this
kit
environment
to
visualize
all
the
metrics
from
api
server,
scheduler,
kcm,
hcd
and
node
level
matrix.
So
you
get
all
the
metrics.
So,
as
a
user
who
is
getting
started
with
testing
once
they
create
an
environment,
they
have
all
of
this
ready
and
once
they
run
the
test,
they'll
be
able
to
get
all
the
metrics
immediately.
So
you
you,
so
the
idea
is
you're
not
starting
from
scratch
where
you
are
coming.
B
So
when
you
need
to
test
you're,
not
bringing
up
a
new
kubernetes
cluster,
you
don't
have
to
install
prometheus.
You
don't
have
to
write
all
the
small
test
cases
they
are
already
created
in
tecton.
You
just
need
to
create
your
pipeline
based
on
what
you
are
trying
to
test
and
get
the
results
quickly.
B
So
how
does
a
workflow
look
like
for
a
user
user
uses
a
kit
ctl
to
create
an
environment?
So
we
added
in
this
project
we
added
a
very
small
cli.
It's
still
very
lightweight.
All
it
does
is,
creates
an
environment
and
deletes
an
environment
at
this
point.
B
I'll
show
that
in
a
second
like
how
it
created,
and
once
the
kit
environment
is
created,
it
consists
of
a
management,
kubernetes
cluster
and
it
also
installs
all
the
operator
that
we
need
to
manage
the
life
cycle
of
these
clusters.
These
operators
are
teched
on
carpenter
load,
balancer
csi
driver
whatever
we
need
to
manage
these
cluster
promises
graphene
also
it
installs
all
of
that
and
then
and
then
the
user
can
go
either.
B
So
at
a
very
high
level.
This
is
how
the
architecture
looks
like
so
once
I
bring
up
a
kit
environment
on
top
it
has
a
management
cluster
which
has
master
components
and
lcd
and
as
a
user
on
the
left,
I
can
talk
to
this
to
this
management
cluster
and
when
the
environment
comes
up,
it
comes
up
with
all
these
operators
installed.
Now.
The
next
step
is
for
me
to
test
kubernetes
now.
Now, if my use case is to bring up a vanilla Kubernetes cluster, it doesn't have to be an EKS cluster: I'll tell Tekton to bring up a vanilla Kubernetes cluster of whichever version I want through a CRD spec (I'll show that), and then it will bring up a guest cluster. We call them KIT guest clusters. These are vanilla Kubernetes clusters running on EC2 instances, they use EKS Distro images, and you can run the same tests that you run against EKS against this vanilla Kubernetes cluster.
B
Now
you
have
this
control
of
changing
all
the
flag
settings
that
you
want
to
change
on
a
control
plane
or
an
underlying
ec2
instance.
Whatever
you
want
to
tweak
in
these
guest
clusters
and
same
way,
you
can
add
workload
nodes
to
this
guest
cluster
and
collect
all
the
metrics
and
share
all
the
metrics
with
us,
so
so
at
a
very
high
level.
This
is
how
the
whole
architecture
looks
like,
and
this
is
the
link
for
the
repo
I
can
put
it
in
the
chat.
B
So
the
ctl
that
I
was
talking
about
is
a
very
lightweight
cli.
It's
in
the
kit
repo
only
and
then
the
one
of
the
commands
that
we
support
is
to
bootstrap
and
environment.
This
is
my
environment
name,
and
it
takes
about
three
to
four
minutes.
What
it's
going
to
do
is
it's
going
to
create
a
vpc,
all
the
I
am
roles,
security
groups,
routes
and
whatever
we
need
for
this
environment,
nat
gateway
and
and
the
management
cluster.
So
once
this
comes
up,
we
the
end
goal
will
be.
B
A
I think my first question is why. Like, I would be interested in seeing some information on why you chose to create, you know... I mean, I don't know what you reuse, if any of these operators are reused, but a lot of this seems brand new. So, you know, what was the issue with the existing, you know...
See, like for cluster provisioning there's, you know, Cluster API and kops and other tools, and then for testing there's Prow. So what was lacking with those? And, you know, maybe it's just...
B
Yeah, so to answer that question, I'll go by components. The reason we started with the KIT operator to bring up these guest clusters was this:
B
Let's
say
I
want
to
test
a
qps
flag,
value
of
20
or
a
50
I
didn't
want
to.
We
didn't
want
to
wait
for
like
longer
and
that
and
then
reconcile
much
faster
on
the
kcm.
What
I,
when
I
used
cops,
what
I
saw
was
when
I
was
changing
the
flags
on
the
api
server
or
on
kcm.
It
was
taking
around
like
a
minute
or
three
minutes,
because
I
think
they
changed
the
instance.
B
The
whole
launch
template
gets
changed
and
then
it
takes
time
for
this
update
to
reflect.
But
what
we
wanted
was
how
quickly
we
can
change
it.
So
we
went
with
this
approach
of
updating
the
pod
spec
itself.
So
the
only
new
component
that
we
added
here
was
kit
operator
and
what
it
does
it
takes
a
crd,
spec
and
converts
it
into
a
pod
spec
and
that
part
specs
are
api.
Server
parts
by
kcm,
part
spec,
scheduler,
part
spec,
so
everything
is
running,
is
being
managed
by
this
management
cluster.
At
this
point
now,
if.
So this is the actual EKS management control plane, against which you get the API server endpoint and you can run the tests you want to; but on the right, where I'm showing the KIT guest cluster, it is not an EKS cluster. It is an EKS-like cluster, which is created using EKS Distro images. Now, all the configuration for this guest cluster is managed through a CRD file.
I can show that in a second, and if you want to tweak any flags, you can only tweak them in this guest cluster, not on the EKS cluster, just to clarify all that. Okay, so yeah.
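For reference, the guest cluster described here is driven entirely by that CRD. A minimal sketch of what such a spec could look like is below; the API group, kind, and field names are assumptions made for illustration (the actual schema lives in the kubernetes-iteration-toolkit repo), so treat this as a shape, not the real API.

```yaml
# Hypothetical KIT guest-cluster spec; group/kind/field names are assumptions,
# shown only to illustrate declaring control-plane settings in a single CRD.
apiVersion: kit.k8s.sh/v1alpha1
kind: ControlPlane
metadata:
  name: example-guest
spec:
  kubernetesVersion: "1.21"          # vanilla cluster built from EKS Distro images
  apiServer:
    replicas: 1
    instanceType: m5.2xlarge         # swap for 8xlarge/16xlarge to compare performance
    extraFlags:
      max-requests-inflight: "800"   # flag tweaks land here, no launch-template rebuild
  etcd:
    replicas: 3
```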
So, since I triggered the environment earlier, now it's complete, and it brought up the management cluster and deployed all the operators that we need.
At this point I can go and access Tekton here.
So now that we have created the environment, I can go over to the Tekton dashboard, and it comes pre-populated with all these tasks that we need to run, so I'll go over a few examples.
So let's say I want to test against an EKS cluster now. For that we have one task which will use the AWS CLI to create an EKS cluster. Once you run this task, it has parameters saying which version you want, so when you are running these tests you can just say: I want version 1.20, 1.21, or 1.22, whatever you want. And then all of these are modular tasks, like tearing down a cluster, creating a Fargate profile and tearing down the Fargate profile. Similarly, we have tasks like: create me
a KIT guest cluster, clean up the guest cluster, set up the control plane, run some load against whichever cluster you want, a pod density test, and tear down these clusters. So these are individual modular tasks that you can run, and the way we run them is through pipelines.
So we have these sample pipelines, which are easy to understand; we have descriptions in there as well. Just to give you an overview of what these pipelines are doing: they basically combine the individual tasks that I showed. So, for example, the first pipeline is going to run a CL2 (ClusterLoader2) load test on an EKS cluster which was created using the AWS CLI.
B
So
in
this
we
are
combining
three
or
four
tasks.
The
first
one
is
to
create
an
eks
cluster
using
aws
cli.
The
second
one
will
add
some
workload
nodes.
The
third
one
will
run
some
cl2
test
and
then
it
will
just
clean
up
everything
same
way.
I
have
this
task
here
which
will
deploy
some
pods
on
a
kit
kit
cluster.
We will have these sample pipelines just to show that when you need to combine these tasks, you can change or tweak the settings however you want for whatever you want to test. Let's say you want to create ConfigMaps: you can combine a task to create an EKS cluster with a task to create ConfigMaps and then clean up, and that could be one pipeline.
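To make the composition concrete, a pipeline along the lines of the ConfigMap example above might be wired together roughly as follows. This is a sketch using standard Tekton syntax; the task names and parameters are placeholders, not the actual tasks shipped in the KIT repo.

```yaml
# Illustrative Tekton pipeline composing reusable tasks; taskRef names are placeholders.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: eks-configmap-load
spec:
  params:
    - name: cluster-name
      type: string
    - name: kubernetes-version
      type: string
      default: "1.21"
  tasks:
    - name: create-cluster
      taskRef:
        name: awscli-eks-cluster-create       # placeholder task name
      params:
        - name: cluster-name
          value: $(params.cluster-name)
        - name: version
          value: $(params.kubernetes-version)
    - name: create-configmaps
      runAfter: ["create-cluster"]
      taskRef:
        name: configmap-load                  # placeholder task name
  finally:
    - name: teardown                          # cleanup runs even if earlier tasks fail
      taskRef:
        name: awscli-eks-cluster-teardown     # placeholder task name
```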
So that's the goal here. I can trigger these pipelines either using CRD YAML files, or I can just go in here and trigger them, so I'll just trigger one to show how easy it can be to test this. What this pipeline is going to do is set up a control plane for us and then set up a data plane.
So when I ran this pipeline, in the pipeline we give the name of the cluster that we want, and because these are guest clusters, we can manage, excuse me, we can manage which instance type is used to run this API server. By default we are using 2xlarge, but if you want to test it on, let's say, 8xlarge or 16xlarge, you can give any instance type here, plus which Kubernetes version you want to test and the number of pods you want to run by default.
So that's what I did here. What it is doing is creating a control plane, then adding some worker nodes to that control plane, then creating the number of pods that we specified and making sure those pods are ready within that time; and if they are not, at the end we will just see a status here showing whether it was a pass or a fail. So like this we are running multiple tests using Tekton.
So essentially that's the goal. Now, as these tests are running, you can also go to Grafana and collect all the metrics, because all these dashboards are preloaded inside Grafana for you to collect all the metrics. So for the scalability team, or the teams that are using it currently, the way they are running it
is they create these multiple pipelines in parallel with the configurations that they need to tweak, and they can run these parallel tests and collect all the results, and it saves a bunch of time for the developers and whoever wants to test these and collect all the results. So yeah. Sorry, go ahead, Jay, I just saw your hand raised.
C
No worries, actually, yeah. I wanted to know if you support some sort of matrix or permutations or variants type of configuration; you actually just referred to this, like the scalability team runs, I think you said, you know, different iterations, different scales, and that kind of thing. Yeah, is there a way to...?
B
Yeah, yeah. I'm just trying to think what would be the right... I think we don't have it, but off the top of...
C
Like, if you can think about GitHub Actions, you know how...
...to create that set of expected permutations that you'd like to test, yeah.
B
Yes, I think we can run that, and the way would be: you create one single pipeline, and then you run these individual tasks, and you can run those tasks in parallel. So let's say you are trying to create Kubernetes clusters with versions 1.21 and 1.22: you can have a pipeline, and in that pipeline, what it will look like is, when this pipeline is running, it...
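What's being described, one pipeline that fans the same task out with different parameter values so the variants run in parallel, could look roughly like this sketch; the task and parameter names are again placeholders rather than the KIT repo's actual tasks.

```yaml
# Sketch: two instances of the same task with different params and no runAfter
# ordering between them, so Tekton schedules them in parallel.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: guest-cluster-version-matrix
spec:
  tasks:
    - name: guest-cluster-1-21
      taskRef:
        name: create-kit-guest-cluster   # placeholder task name
      params:
        - name: version
          value: "1.21"
    - name: guest-cluster-1-22
      taskRef:
        name: create-kit-guest-cluster
      params:
        - name: version
          value: "1.22"
```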
C
I'm really glad it's recorded, because I want to share this with the ACK team. Sure. Thank you all, see you, bye.
B
Right. I guess, if not, yeah, that's the end of my demo then. Thanks, Evan. Cool.
A
Thanks for doing that, really appreciate that. Thanks. All right, I will hop over into the agenda, so let me share my screen.
First item is, looks like... issue 400, where's the image for 1.22.3? Yeah, I think somebody pointed this out to me yesterday, so I was going to promote 1.22.3 today, and hopefully we can resolve this. I'm guessing, shipping, you filed this one? I...
D
I wrote all four, I think, so it's pretty easy. The thing is that none of the releases from two weeks ago were promoted, I guess.
Yeah, plus the changelog is still a work in progress, I guess, for everything. Okay, did...
No worries, just so...
A
Include... it's all of these, I think it's these three, yeah. This was like a rush to create these releases, because Sorov needed it for the tagging controller, and I never got around to promoting the images after that. So...
Oh, the promoting part is probably separate, but let me show you what our docs have. We basically just have some documentation on promoting. Yeah, I guess we just link; that's what you need to do. So there's some stuff that needs to happen, basically adding the SHA of the image to this file, and then the promotion will happen to the production registry.
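For anyone who hasn't done a promotion before: the file being referred to is the image-promoter manifest in the kubernetes/k8s.io repository, and an entry is essentially the staging image's digest mapped to the tags to publish. The digest and tag below are made up; the real values come from the staging registry for the release being promoted.

```yaml
# Image promoter manifest entry (kubernetes/k8s.io); digest and tag are examples only.
- name: cloud-controller-manager
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000":
      - "v1.22.3"
```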
And sometimes I forget to go back and update the GitHub release and stuff. It's just an unfortunate manual process where, you know, we need to take more care. I mean, there's probably more stuff we can automate here if we spend some time. Yeah, I don't know, though; for promoting the image I don't see that there's a way, because you have to, you know, update the k8s.io repository, so...
The one thing that I was thinking of is, for the changelog, we could probably have a test in the changelog pull request, like a presubmit that checks to see if the images are present. I imagine we can do that without permissions.
D
I think the bigger issue is not doing the git... the GitHub release; no, no, the releases in GitHub.
A
Yeah, that makes a lot of sense. Let's add that to the...
Promoting the image to prod, in fact... Okay, got some action items there. I think that should cover it.
Yeah, sounds good. Skip reporting network interfaces created by pods.
D
Yeah, so we received some issues, and we noticed also in some test runs that the CCM...
A
I was actually considering bringing this up, and... so basically what the problem is, is that you need a pull request, a patch basically, that's not quite merged yet. So...
Let me see if I can find it. So this is basically due to the fact that the node IP flag in kubelet... let's see if it's in my recent notifications.
D
So I guess maybe there's a way to recognize the VPC CNI ones, or to recognize the node ones at least, from the metadata. You could have gotten the primary IP address, I guess. Or, I don't know, I guess...
A
I do believe that it's always the primary for the node IP and then secondary ENIs for everything else, so there probably is a way to differentiate, yeah. My question would be: are there exceptions? And, you know, for...
Yeah, there are a lot of different cluster configurations out there, so, you know, do we want to check to see that it's actually managed by the AWS CNI first, before we make assumptions about the IP addresses?
But I do like the idea, yeah. So this was the issue that I was talking about, basically.
And the problem is that, right now, this is only provided when kubelet has cloud-provider set to external, but the fix for upgrades is that you have all kubelets set this annotation. And the reason this helps is because those secondary ENIs and IPs are filtered out when the provided node IP is set to, you know, whatever the primary IP address is.
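The annotation being discussed is alpha.kubernetes.io/provided-node-ip, which the kubelet writes onto its Node object from its --node-ip flag when running with --cloud-provider=external; when the AWS cloud controller manager sees it, it can report just that primary address instead of every ENI IP. The node name and address below are purely illustrative:

```yaml
# Node as registered by a kubelet started with:
#   --cloud-provider=external --node-ip=10.0.12.34
# The annotation lets the external cloud controller manager filter out
# secondary-ENI addresses when it populates the node's addresses.
apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-12-34.ec2.internal        # illustrative node name
  annotations:
    alpha.kubernetes.io/provided-node-ip: "10.0.12.34"
```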
D
Last time I checked this code, which was half a year ago, it did a lot of work around...
A
Yeah, it does do the filtering when the CCM is present, but again, that's only when you have this annotation. So this annotation is supposed to be the alternative to the kubelet flag when you have the CCM enabled. Like, before that
existed, there was no way to filter addresses; the only way to do it was with kubelet, and the CCM had nothing. So this annotation was added to give the CCM that same behavior. But the problem with the existing behavior is that the annotation was not added by kubelets unless they had cloud-provider set to external, and when they
don't, right, the upgrade case was still totally broken, because the kubelet and the CCM would fight over what the node addresses should be; and because one has the node IP flag and the other one doesn't, you'd get just the node IP, then all the IPs, then just the node IP, then all the IPs again. So that's what the reference to the node addresses flapping is. Mm-hmm.
D
It didn't know that part until we added that flag to the CCM. If you would do the same without the external CCM, then it would have worked; it was code in Kubernetes that was switching. You just had to say that you want an IPv6 address there in node-ip. So yeah, I think there are still quite a few things to fix on this side, probably. Probably when things are closer to disabling the in-tree cloud providers, people will notice the differences more, I guess.
A
Yeah, well, if you have the IPv6 issue that you just referenced, I don't know that I'm familiar with that one, so that would be good to bring up; but also probably creating an issue for this. Like, I like the idea; I don't know if there's already one, and then we can investigate and get it fixed.
Yeah, I think I got it covered up here. Okay, anything anybody else has before we end?