From YouTube: SIG Cloud Provider 2021-08-18
Description
- consuming cloud credentials (Red Hat): as we implement the CCMs in OpenShift, we are running into complications around how the individual CCMs consume their cloud credentials.
- for example, Azure CCM referencing a ConfigMap
related, https://github.com/kubernetes/kubernetes/pull/104314
curious to hear if the community has any thoughts about this, or if there are plans to make this more standardized?
not specifically; each provider has its own method
- WIP PR for new node condition representing initialization state https://github.com/kubernetes/kubernetes/pull/104436
no KEP required, but need to improve API docs
A
All right, hi folks. Today is August 18th, 2021, and this is the bi-weekly Cloud Provider SIG meeting. Just a reminder that this is an open source CNCF meeting, so please follow the code of conduct. All right, so let's get started. There are two items on the agenda, but first let's run through the subproject updates. I'll try something new today: instead of calling folks out for updates, I'll just read whatever folks have already written.
A
So if there's an update that you want to make, please write it in the agenda; otherwise I'll just skip it. I see the IBM folks have added the first one, about the VPC Block CSI driver.
B
So this is the one we just added, Andrew. This one was added to the subproject for IBM Cloud last week.
A
Awesome. I think there was another related update a few weeks ago; was it the same one?
B
Yes, yeah, I think we had a PR. I did see the comment from you, and I think Walter as well, but this was the one where the issue was created, a PR was created, and we added the project and a couple of new members to the kubernetes org.
A
Cool, cool, yeah. I remember seeing an issue about this. I don't remember what happened, but yeah, it's good to see this one, right.
C
Yeah, cool, cool. It's good to know that we've got it resolved. Awesome, thanks for your quick review there.
A
No problem. Do you want to talk a little bit about the project, the justification for why it's a Kubernetes project, and what it does?
B
Very
briefly,
I'm
not
so
much
involved
with
the
development
here.
It's
the
folks,
unfortunately
they're
because
of
the
time
zone.
You
know
I
could
not
request
them
to
attend
to
this
call.
They
are
in
india,
but
it's
been
in
development
for
some
time
and-
and
you
know
mainly
as
it
says,
the
csi
driver
right
for
the
block
storage
on
on
ibm
cloud.
B
So
I
mean
that's
pretty
much.
I
would
brave
I
mean
if
it's
sitting
good,
then
I
can
try
to
have
one
of
the
developer
there
to
be
part
of
the
meeting,
and
you
know
provide
a
bit
a
little
bit
more
in
depth
about
the
project
if
it's
needed.
A
Okay,
cool
yeah,
I
mean
I
was
curious,
like
what
the
relationship
between
the
vpc
and
the
the
block
storage
is.
But
yeah
I
mean
that's
just
my
own
personal
curiosity.
You
don't
have
to
dig
into
it.
I
would
also
imagine
that
the
like
the
previously
announced
ibm
cloud
provider
and
the
vpc
controller,
I'm
assuming
like
there's
plans
to
also
like
move
that
into
creatives
and
whatnot.
So
I'm
looking
forward
to
those
updates
in
the
future
as
well.
B
A
Okay,
cool,
okay:
I
see
someone
added.
D
I
think
yeah,
sorry,
that
was
me
yeah.
So
this
also
relates
to
the
last
week's
cloud
provider
extraction
meeting,
but
lkg
in
this
case
is
last
known
good.
D
So
this
has
to
do
with
us
trying
to
come
up
with
a
way
to
be
able
to
both
test
that
recent
changes
to
kk
like
if
we
think
of
it
as
the
kk
kernel
haven't,
haven't
broken
a
given
cloud
provider
in
this
case
lg
cloud
provider
gcp,
but
also
that
we
can
then
detect
that
recent.
The
gcp
works
with
the
last
known
version
of
kk
and
we're
hoping
to
make
this
so
there's
two
parts.
D
Also
specifically,
what
came
up
from
the
cloud
provider
extraction
meeting
is
that
a
significant
portion
of
the
storage
tests
are
written
and
node
and
storage
tests
are
written
specifically
against
the
gcp
system.
So
once
we've
done
cloud
provider
extraction,
all
those
tests
are
going
to
stop
working
unless
we
can
sort
of
run
them.
As
last
known,
good
against
cloud
provider
gcp
or
unless
someone
rewrites
all
those
tests,
which
seems
doubtful,
but
if
we
have
a
volunteer,
I
won't
complain.
A
D
Like I said, I won't complain if we get volunteers to do that work. The one thing I will say is, I think all the cloud providers are going to want an LKG system anyway, so I don't think it's a loss to do this.
E
Just out of curiosity, do we have an issue for this, or would it be worthwhile to put one together? If it would be easier to get a community member to rewrite some of these tests, maybe having an issue to track it would really help, because then we could point people to it and say, hey, if you want to get involved, do some of this stuff.
D
E
Okay, cool. I'm always looking for either ways to get involved or ways to direct other people towards stuff, and this sounds like one of those things where someone could get involved and really understand what's going on here by going over the tests again.
D
Absolutely. And one of the wonky things is, if you look at a lot of the mount and unmount tests, they're kind of, I mean, there's no nice way of putting it, they're kind of gross. They use specific containers, they assume particular storage devices, the pods that they use are specifically built to run on GCE, GKE, or GCP, and they assume the Linux layout when they do various mount and unmount operations and various other things.
D
E
The autoscaler tests, or the cluster autoscaler tests, suffer from a similar problem: they're just very specific to GKE and GCP. Not that that's a bad thing per se, but yeah, it'd be nice to make them a little more abstracted.
A
Cool, cool, yeah, those are all good points. All right. Okay, and I guess this next topic here is the same thing, Walter?
D
Yeah, this is essentially the same thing. I don't have the recording up yet, but hopefully I'll have it up this weekend, and I strongly recommend people go and look. I thought it was quite an interesting discussion from the extraction meeting.
A
Okay, all right, let's jump into the agenda. I see elmiko added a topic about consuming cloud credentials.
E
Yeah, so this is something that we're, I wouldn't say struggling with, but it's something we're toiling with right now at Red Hat. We're really trying to prepare to get the CCMs ready for future releases, and right now we're just working on putting this stuff behind feature gates and whatnot. But something we've been noticing is that there doesn't really seem to be much consistency in how cloud credentials are consumed by the individual CCMs.
E
D
I don't think it's come up before, but I like the discussion. I will also say I don't think we are done; I mean, we've been working on the common controller manager, and I think there are several things on the CCM still outstanding where we need to make improvements, and this just seems like a good example.
A
Okay, I think the way that each provider consumes credentials is one of those things that just grew organically, and it kind of makes sense, because every provider has different authentication mechanisms, right? Some public cloud providers have access to a metadata service, some are on-prem and don't, some accept tokens.
A
Others use basic auth or whatever, right? So I think over time providers adapted the ways they can authenticate to their services.
A
Things have evolved from just storing the credentials in plain text in the config file, to a ConfigMap, to Secrets, to, for some providers, actually removing that and only supporting auth with the IAM role attached to the instances. So I'm not sure it's possible to standardize, just because of how different each provider is.
D
Well, so the other thing I will say here, and I think it's a really good point, Andrew. But unless I'm misunderstanding elmiko's point, are you talking about, how do I put this, taking Amazon as an example, are you talking about running Kubernetes on Amazon with the CCM, or are you specifically talking about running within Amazon's managed Kubernetes, yeah?
E
D
I think the point is absolutely right in the former case. I wonder if we could do something here; I mean, there are at least two efforts I'm aware of for pluggable credential providers, and if we could do something similar, where we basically say, look, we make an easy way to plug in your credential provider, we have credential providers that are specific to each cloud, but we might also have a general one, and then one for the managed providers.
D
F
I understand the need for a pluggable credential provider for something like the kubelet when it's getting image credentials, because the kubelet is part of core and you're obviously going to have different mechanisms to get image credentials. But for external cloud providers, you're already separate from core. I guess I don't really understand the benefit of further separating the mechanism for getting credentials when it's going to be different per cloud provider anyway. You already have the CCM separate per cloud provider, right? So what benefit does separating it bring?
D
A great question, and I will give my attempt at an answer and then I'll let elmiko speak. But my thought is, much in the way that we make which controllers run easily pluggable, I mean, arguably, you could just build in the set of controllers you want and not worry about it, but we put some effort into the CCM to allow that to be easy, especially for the reference implementation.
D
Making it easy to plug that into the CCM seems beneficial to that sort of customer. I agree that it's probably not helpful to Google or to Amazon or VMware, but it seems like it could be very helpful to, you know, Walmart or Riley's or CBS or whoever it is who wants to use this stuff but doesn't want to use a managed solution.
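A minimal sketch of what such a pluggable credential provider hook could look like, as floated above. This interface does not exist in Kubernetes today; every name in it is a placeholder for illustration only.

```go
package credentials

import "context"

// CredentialProvider is a hypothetical interface sketching the idea discussed
// above: each cloud (or a managed platform) plugs in its own way of producing
// credential material, while the CCM wiring stays common.
type CredentialProvider interface {
	// Credentials returns opaque credential material, e.g. the contents of a
	// mounted Secret, a token fetched from a metadata service, or env vars.
	Credentials(ctx context.Context) (map[string][]byte, error)
}

// secretFileProvider is a placeholder implementation that would read keys
// from files mounted off a Kubernetes Secret; shown only to illustrate shape.
type secretFileProvider struct {
	dir string // e.g. a hypothetical /etc/cloud-credentials mount
}

func (p *secretFileProvider) Credentials(ctx context.Context) (map[string][]byte, error) {
	// A real implementation would read the files under p.dir here.
	return map[string][]byte{}, nil
}
```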
F
A
Yeah, I don't quite understand why we're making the distinction between managed and not managed in the first place. Either way, what's being managed is the credentials plumbed into the CCM, and the CCM is used in both cases; it's just a matter of whether the user is maintaining the credentials put into a secret or not.
D
My thought was: in Google it's a metadata server, right, and it's plumbed into Google. But if I want to run Kubernetes and I have some infrastructure that's not that, let's say, as an example, I build a Kubernetes cluster that actually spans multiple cloud providers and I don't want to be tied to any one cloud provider, I want to plug the same set of credentials into my CCMs that are running in both Azure and VMware.
A
D
E
Well, yeah, Walter, you're definitely touching on and kind of dancing around some of the topics that we've been struggling with. There are two sides to this. One is this hybrid cloud notion that I think you're touching on, where you might have...
E
...different clouds in place, or you might have different credentials that you need to synchronize between them, and so you'd want those to be the same. But some of the real-world troubles we're seeing, and I have a feeling, again, this is not going to be super relevant for someone just looking at GCP or just looking at Azure or whatever, but I think it will be as we get more people who are interested in building Kubernetes solutions that can provide access to many different platforms.
E
To this hybrid cloud kind of experience. What we're noticing is that, as we build the platform that integrates all these CCMs, there is a very uneven API surface in terms of how the credentials are injected or shared with the CCMs, and that's causing us to have to build a lot of automation around this, and I think that others are going to have to go through the same toil.
F
Yeah, from my perspective, this would be best served as an open source project that somehow takes credentials and puts them into some defined specification. I don't know how you'd get all of the SDKs to consume the credentials from your defined format, but I feel like it...
F
It would be better served as a kind of external project type of thing.
E
And that's totally fine. I think part of the idea, at least from the people I've been talking to at Red Hat, is they just wanted to see what the community thought about this and if there had been any ideas around it. So if the notion here is that maybe the community would be best served by a separate project that does this, or a separate controller or something that can do this, you know, that's fine.
F
I
guess
it
depends
on
what
different
you
know,
types
of
credentials
you
want
to
unify,
but
like
what?
What
format
do
you
want
to
put
them
in
or
like
you
know,
how
are
you
like?
Are
you
using,
like
you
know,
for
aws
you
can
either
use
the
the
metadata
server
or
you
can
use
like
a
file
or
you
can
use
environment
variables.
F
So
are
you
kind
of
saying,
like
you
want,
you
know
all
of
these
ccms
to
consume
credentials
from
files
on
disk,
for
example?
Or
is
it
something
else.
E
There was this notion of, okay, could we just not use ConfigMaps? Could we have these things in Secrets at least, or something, right? And that was just talking about Azure, but then we started to talk about, okay, well, could it be the case that all these things expect some sort of Secret or something, rather than expecting a ConfigMap?
E
And it sounds like that might not necessarily be appropriate, but this is the discussion we were having as we started to look at how each one of these things was acting differently with respect to the API for how those credentials are delivered to the actual binary in the container. So at this point, this discussion has been good for me. I could certainly go back to some of the people who had more complaints about this than I did and say, hey.
E
Maybe we should put together a HackMD or a Google Doc or something to bring together our ideas about what we'd like to see, and then we could see, does it make sense to take this to a KEP, or try to generate a project or something off of it? I'd certainly be happy to go that way.
D
So
so
I
I'm
actually
going
to
suggest
that
I
mean
there's
nothing
wrong
with
the
dock,
but
if
you
look
at
our
designs
for
how
caps
are
supposed
to
get
handled,
the
first
stage
of
a
cap
is
actually
the
proposal
of
the
problem
and
getting
the
sig
to
agree.
It
is
a
problem
they
want
to
solve,
and
so
I
would.
D
I
would
suggest
that,
in
fact,
it
sounds
to
me
like
this
is
an
ideal
candidate
for
that
first
stage
of
cap,
where
all
we're
trying
to
do
is
outline
the
problem
and
gather
agreement
on
whether
or
not
we
want
to
solve
it.
E
A
Yeah, my current thinking is that I'm having a hard time convincing myself that it is a problem. If you look at the landscape of Kubernetes that's in-tree, you still have the same problem: you have to pass the kube-controller-manager and the API server a cloud config which is already configured, so sometimes it has credentials, sometimes it doesn't.
A
But you have that problem whether you're in-tree or external. I think the Go interface that we call the cloud provider interface, that is the API, and how you authenticate to your cloud provider and implement that API is an implementation detail of the API we defined. So basically we're saying we may want to expand that interface to also account for credentials, but that feels like we're leaking implementation details up into the common interface. And I feel like this is also kind of a management problem.
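For reference, an abridged excerpt of the Go interface being referred to, from k8s.io/cloud-provider. The method set shown here is partial and written from memory, so treat it as approximate rather than authoritative; the point being made is that credentials do not appear anywhere on this surface.

```go
package cloudprovider

// Interface (abridged) is the contract each cloud provider implements.
// Whatever credential handling a provider does happens behind Initialize
// and inside its own implementation, not in this common API.
type Interface interface {
	// Initialize provides the cloud with a kubernetes client builder and a
	// stop channel; providers typically set up auth and clients here.
	Initialize(clientBuilder ControllerClientBuilder, stop <-chan struct{})
	LoadBalancer() (LoadBalancer, bool)
	Instances() (Instances, bool)
	Zones() (Zones, bool)
	ProviderName() string
	HasClusterID() bool
	// ... further methods omitted for brevity.
}
```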
A
Right, like we're thinking of ways to manage credentials in a multi-cloud world, and I just feel like, no, I don't think... I mean, we can maybe think about projects to make this easier, but I just feel like that's kind of what we're all trying to build, right? All of us working on this, that's the competition that we're all in: how do you...
A
F
Let's say you decide that you want each CCM to consume credentials in a specific way. Then it would make sense for you, or whoever cares about this, to go and open pull requests to get those CCMs to consume it. If AWS doesn't support whatever file format, then you add that support, because you need it for your specific use case, rather than actually writing a specification or something like that. Would that kind of make sense?
D
E
Yeah, I'm totally down with not trying to get the community to do a bunch of work if we don't think it's a problem. We may learn from this process that the community does not think this is the same problem that we think it is, and in that case, yeah, we just have to write our own solution for how we want to handle this.
D
I will also say that one of the reasons I would like the KEP is that I'd like that road sign, so that a year from now, if we've decided we don't think we are the right group to deal with it, we're not rehashing this, right? And we may even decide this is something we want to approach, but this is the wrong SIG.
D
Cluster Lifecycle may end up being a better fit, or Security may be a better fit, but I don't think that KEP work is going to be lost even if we're the wrong SIG. In fact, I'm going to claim the KEP work is not going to be lost even if we, as a total Kubernetes community, decide this isn't the right approach.
D
A
Yeah, but I do want to echo what Nick was saying earlier: one of the benefits of pulling everything external is that if you have a use case with Azure and you want Azure to use a Secret, not a ConfigMap, I think you should just go ahead and open an issue with the Azure folks, and the Azure folks can accept it or not. We're not tied to specific standards or best practices from core Kubernetes, right?
A
Obviously, having a standard so we can all be consistent is nice, but in the absence of that, I don't see why, for your OpenShift use case, you can't go ahead and just try to convince the Azure folks to support Secrets, or convince the vSphere folks to do something else. I think that's all fair game.
E
D
Whereas if the SIG agrees, then I think we own those. I mean, as one of the people maintaining cloud-provider-gcp, sure, that's the Google reference implementation of how to run Kubernetes on Google, but it's not owned by Google. It is owned by the CNCF. It's a SIG Kubernetes repo, and as such it is owned by the CNCF, not by Google. Yeah, that's a good point.
E
Yeah, I think another side of this too is that we're running into several issues that probably others are going to run into as well. Red Hat is pushing very hard to try and get the CCM and the out-of-tree workflow into our product, right?
E
Let's get this problem into the open and see what the community thinks about it, and if the community thinks this is not an issue, that's fine; we'll just figure out what the next step from there is. But maybe there are voices in the community who aren't at these meetings who would see this and kind of agree with us. So I'm happy to take this to the next step.
A
Yeah
and
please
continue
to
like
raise,
you,
know,
points
of
friction
or,
if
you're,
if
you're
forever,
for
whatever
reason
like
struggling
to
adopt
external
copywriters
like.
I
think
this
is
the
right
forum
and
I
think,
more
than
happy
to
talk
about
ways
we
can
like
mold,
ccm
and
even
like
core
communities,
components
to
help
with
adoption
right.
So
yeah
plea,
please
keep
raising
them.
A
Okay,
yeah
next
topic,
so
this
is
kind
of
like
something
that
nick
mentioned
in
a
previous
call
about
node
conditions,
and
I
figured
I
created
a
work
in
progress,
pr
just
to
get
the
idea
out
there
and
I'm
not
sure
if
this
needs
a
cap.
I
mean
I'll
write
the
cap
if
we
feel
like
it
needs
it,
but
this
seems
like
a
small
enough
change
where
maybe
well.
The
change
is
small
but
like
it
affects
the
core
api.
So
maybe
maybe
it's
kept,
but
anyways.
A
The premise of this PR is that we're introducing a new condition, the naming is TBD, but the name I picked for now is something like NodeInitializedByCloudProvider, where, when the CCM tries to initialize a node, it'll apply the condition with a status of false or true.
A
So if node registration failed for whatever reason, like it couldn't figure out what zone an instance was in, or it couldn't figure out the addresses for a node, it'll surface a condition with a reason saying this node failed because, you know, this API request returned a 500 or 400 status, or whatever, and it'll also tell users that.
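A rough sketch of what such a condition might look like on a Node object. The type name is explicitly TBD in the WIP PR, so the names below are placeholders rather than the final API.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Placeholder illustration of the proposed condition: the CCM would set
	// Status to True on success, or False with a Reason/Message explaining
	// why initialization failed (zone lookup, node addresses, etc.).
	cond := v1.NodeCondition{
		Type:               "NodeInitializedByCloudProvider", // naming TBD in the PR
		Status:             v1.ConditionFalse,
		Reason:             "NodeAddressLookupFailed", // placeholder reason
		Message:            "cloud API returned HTTP 500 while resolving node addresses",
		LastTransitionTime: metav1.Now(),
	}
	fmt.Printf("%+v\n", cond)
}
```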
A
D
A
I mean, it's kind of the same thing as... Oh, so you're saying only apply the condition if there's a failure, but if there's no failure, just don't have the condition?
A
Yeah, and that's the logic today. That's still the logic where, if you set cloud-provider external on a kubelet, it'll apply this taint, the node uninitialized taint or whatever, and you know that the registration was successful because you see the taint removed and then you see other properties on the node. But I got some feedback internally and from other folks that it's not a very clean API, right? Checking the non-existence of a taint to know that something was successfully done is kind of hacky, and an actual condition that says node-initialized is true is a better means for things integrating with your cluster to actually know that, oh, this node actually got initialized, and so we can move forward with other operations on the cluster.
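For context, a small sketch of the "check that the taint is gone" pattern being described, using the existing node.cloudprovider.kubernetes.io/uninitialized taint key; the helper name here is made up for illustration.

```go
package nodeutil

import (
	v1 "k8s.io/api/core/v1"
	cloudproviderapi "k8s.io/cloud-provider/api"
)

// initializedByCloudProvider reports whether the CCM has finished initializing
// the node, inferred (as discussed above, somewhat hackily) from the absence
// of the node.cloudprovider.kubernetes.io/uninitialized taint.
func initializedByCloudProvider(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == cloudproviderapi.TaintExternalCloudProvider {
			return false
		}
	}
	return true
}
```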
A
D
D
A
But that's the point. So yeah, I'm not really following your logic there, but the taint that we add is specifically for the cloud provider initialization, right? And the condition we're adding is one whose name is specific to cloud provider registration. So if there's another taint for other reasons, it's not going to negate those taints.
D
Yeah, that is fair. I'm just, how do I put it, my worry is, and again, I'm not saying we shouldn't have the label, I just worry that we could trick people with it. Think about it: if you go to your doctor and your doctor says, well, I've checked you for cancer and you don't have cancer, that's good news. Does it mean I'm healthy?
D
Well, no. It turns out that your HDLs and your LDLs are through the roof and your blood pressure is 200 over 150, but you don't have cancer. I'm always a little worried about using an analogy to make a point, but I just worry that this may trick people into thinking the node is fine.
E
Yeah, but in some respects, though, just to go back to what you were saying before, because I totally dig what you were saying, Walter: yeah, if we're only checking for taints, then that's totally future-proof, because we could add more taints in the future and we're always just waiting to see if those taints disappear or whatever, right?
E
On the other hand, though, there could be some positive effect to being able to say, yes, the CCM node initialization process has completed. There may still be taints on this node, but at least the user would know at that point that the CCM portion of the initialization has completed, and whatever else is still happening is outside of that process. So it would be a way to positively indicate that one stage of the initialization has completed, I guess.
A
D
I was trying to be clear: my worry is not that there's anything actually technically wrong. I'm just a little worried that we might trick people into checking the wrong thing. And I guess, concretely, how would I change this? I'm trying to think of exactly the answer to that. Maybe it's something as simple as more in the const comment that actually links to the right way of checking that a node is healthy.
D
You
know
I
I
or
just
indicates,
or
maybe
changing
the
ver,
the
verbiage
on
on
the
label
a
little
bit
that
just
says
ccm
initialization,
complete
right.
I
I
think
I
get
thrown
by
phrases
like
node,
initialized,
node
initialized
makes
me
think
that
the
node
that
we're
done
you
know,
whereas
maybe
that's
not
quite
what
we
mean,
and
I
I
realize
I'm
getting
I'm
getting
kind
of
pedantic
here
and
I
apologize.
F
I think Nick was going to say something. Yeah, I was just going to say, I think it's worth, either in a KEP or just in the description of this PR, laying out the user story of what is going to be watching for the successful condition and what the successful condition exactly represents, just to get an idea of why we need it.
D
Yeah, I agree with what Nick said, and in fact I would go further: if this isn't the gating item for a node being healthy but is in fact kind of a debugging marker, then I would argue we don't need a KEP. If we have an idea that this is somehow a gating item before we start taking on workloads, then I would suggest it needs a KEP.
A
Yeah, so let me clarify the point: it's only really a signal. The taint is still the blocking thing for workloads before a node is actually registered. The condition is just an extra signal that surfaces the reason why registration might have failed, and whether it succeeded.
E
Yeah, I like Walter's comment about changing the message to be more specific about what has been initialized or what just completed.
A
I also want to make one comment about the node health that we're talking about: I don't think this is a full representation of node health. It's just one of many node conditions that would surface. So again, it's not that.
A
I wouldn't anticipate anyone using the condition to assume that a node is healthy; they should always check the full list of node conditions. But you can imagine, if you're integrating with a platform, you would use this as a way to know: is the CCM properly configured? Is there something that I misconfigured? Are there bad credentials on this cluster that I need to rotate? Things like that.
A
That
would
surface
through
a
condition,
as
opposed
to
like
trying
to
pull
that
data
out
of
checking
taints,
whether
or
not
they
exist
or
not,
or
things
like
that.
E
Yeah
I
mean,
I
think
the
real
concern
here
is
like
you
know,
because
I
totally
agree
with
what
you're
saying
like
we
wouldn't
want
a
user
to
get.
You
know,
build
automation
or
something
around
this,
but
you
know
like
someone's,
going
to
come
across
the
documentation
for
the
condition
and
they're
not
going
to
see
the
documentation
about
the
taint
or
something
and
then
they're
just
going
to
be
like
oh
yeah.
I
just
need
to
check
for
this.
Can
you
know
it's
going
gonna
happen
right.
A
Yep, yep, that's fair. Okay, so what I'm hearing is that we need to make some significant improvements to the API docs, but we don't feel like a KEP is needed for this. All right. Okay.
A
Sounds good. Okay, I think that's all for the topics. Anyone else want to add anything last minute?
A
All right, I think the right people seem to be assigned to this. Is Bowei on this?
D
A
D
D
A
D
Exactly, that's what I've been thinking, so just do like a remove-sig.
A
Okay, so I do want to keep this one in, all right, because if this turns into a discussion about features in-tree again, that's fine. Okay, all right: failing tests on the legacy jobs here, "test secret update." Okay, let's see. Okay, I will.
A
I always have to remember to unsubscribe from things after I triage them; otherwise my GitHub notifications get all crazy. Okay, "log attempts to output response.body."
C
A
As well. Who was that? Yeah.