From YouTube: SIG Cluster Lifecycle - Cluster API 22-03-16
A: Hello everyone, today is March 16th, and this is the Cluster API office hours meeting. As a reminder, Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette: if you would like to speak up, please use the raise-hand feature, which you will find under Reactions in your Zoom window.

A: If you have any topics, please feel free to add them to the agenda; I'll post the link in chat. If you don't have access to the agenda document, please sign up for the mailing list. There is a link at the top that you can click, and you should receive permissions relatively soon. Before we start, does anybody want to say hi?

A: Three times... all right. Chris, you have the first topic; I'll keep the open proposal readout for later. Do you want to go ahead?
B (Chris): All right, hold on... okay, I can hear you now. You can hear me now? Okay, cool. Yeah, so we noticed, while trying to upgrade the Equinix Metal Cluster API provider, that KCP is not upgrading CoreDNS automatically unless you tell it specifically what version to set it to.

B: We were surprised by this. I don't know if others are surprised by this or not, so really I just put down a bunch of questions and observations, like: should this be changed? I think people who come from a kubeadm lifestyle would be surprised, since normally kubeadm would manage it for them. And we looked at it as: well, let's assume they want us to manage it ourselves.
A: I can speak to that a little bit. At the time the team built KCP, we took on the management of CoreDNS within KCP because kubeadm does it, so we added it in as kind of an extension that you can also opt out of with an annotation. Fast forward to today: we're not upgrading CoreDNS unless you specifically set the version; that's where we are.

A: That's for a number of reasons. For example, if you're overriding your image repositories, we don't know whether you actually have that CoreDNS image available at a specific tag, so before assuming a tag we would also have to make sure you're using the default repositories.
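For reference, this is roughly what pinning the CoreDNS version on a KubeadmControlPlane looks like today; a minimal sketch, with an illustrative object name and image tag (pick the CoreDNS release that has been tested with your Kubernetes minor version):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane        # hypothetical name
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      dns:
        # KCP only upgrades CoreDNS when an explicit tag is set here;
        # the tag is illustrative, use the release tested with your
        # Kubernetes minor version. imageRepository can be overridden too.
        imageTag: v1.8.6
```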
A: This would be kind of a behavioral change. If we do allow something like a map of versions inside KCP itself, I'm not 100% against it; I just think we should make sure it's opt-in at first, as in you say, "hey, auto-upgrade my CoreDNS version", and then maybe we make it the default later.

A: That's just because folks may have already automated their clusters around the fact that you have to specify the CoreDNS version explicitly, or they may just want to keep using the version they have right now, even across breaking changes. But yeah, we could probably discuss this in an issue.
B: I mean, I'm not necessarily asking you folks to take it on. It's that, if we have to manage it ourselves, I think there are some gaps in what we're supposed to keep it at, like what sort of breakages or testing are happening, so we know which version is the safe one to be on for this version of Cluster API or Kubernetes.
A: Yeah, I wouldn't be opposed to having a map right now. Truthfully, in the fullness of time CoreDNS should become an add-on that gets managed outside of KCP, but to provide a better experience I think it would probably be fine to manage this internally. But yeah, let's follow up in an issue. Fabrizio?
C: Yeah, first of all, plus one to discussing this in an issue. I think in the most complex use cases companies have their own internal version schema, and so they are already using this field. But I want to comment briefly on what we can do.

C: We can also leverage the kubeadm team for this: we can report the CoreDNS version that has been tested by Kubernetes and by CAPI, because we keep it in sync in our end-to-end tests.

C: Basically, CoreDNS requires a migration utility, when you do an upgrade, to upgrade the CoreDNS config, and that is our limit, because we embed this library.
A: I think this would probably also catch an issue that I personally hit during testing on a cluster, where the migration wasn't checking the version, or maybe they've already fixed that. Like saying: this is the maximum CoreDNS version that we support, because the CoreDNS migration library supports only up to this version. I only found out through the logs that it was failing because of that, so having a well-tested top CoreDNS version for each Kubernetes version would, I think, be beneficial.
D: I think we can definitely open an issue, but we already have some code there: we should already, today, compare the CoreDNS version in KCP against what the library supports, and the library should give us an error which we then return to our users. But yeah, apparently we can improve something about that whole story.
A: Cool. Fabrizio, you still have your hand raised; did you want to say anything else? Any other questions, comments, or concerns on this topic before we move on?
E: Yeah, so I'm wondering: how does this impact teams that might elect to manage CoreDNS externally? Is there a way, similar to kubeadm, where you can kind of work around this step?
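For context on the question above: KCP does support opting out of its CoreDNS and kube-proxy management via annotations on the KubeadmControlPlane, as mentioned earlier in the discussion. A minimal sketch (the object name is hypothetical; check the current KCP docs for the exact annotation keys):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane                        # hypothetical name
  annotations:
    controlplane.cluster.x-k8s.io/skip-coredns: ""    # KCP leaves CoreDNS alone
    controlplane.cluster.x-k8s.io/skip-kube-proxy: "" # KCP leaves kube-proxy alone
```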
A: All right. Paul, do you want to go ahead with the CAPI OCI provider?
G: Are there any restrictions on sharing the whole screen? It's only letting me share individual windows as opposed to the whole screen. I've never had that with Zoom before; it must be a setting somewhere. Right, because showing one window isn't really going to work.

G: Okay, I'll see what I can do; I'll just have to... right, okay, thanks. Despite the bungled start, what I'd like to do is introduce our provider for Oracle Cloud Infrastructure, which we pushed to our Git repo last week. We've produced a Cluster API provider for Oracle Cloud Infrastructure, and I'm going to take you through what we've done so far, show you what it produces, and how we've approached it.
G: From our perspective, in our cloud you have a tenancy, which is the overall account that is subscribed to a region, and within a tenancy you have things called compartments. Compartments are a way of organizing resources like users, security, and networks, so you can separate things off departmentally or project-wise. And what I've done, instead of just using kind or Rancher Desktop or something to bootstrap a cluster, is create a managed OKE cluster, using a quick start that creates a managed cluster in a compartment.

G: Then I've updated that to be a management cluster and used it to generate a workload cluster in a different compartment, so the workload cluster sits in a separate compartment from the management cluster. When we create that cluster, we rely on a compartment being there, and we create a VCN, which is a virtual cloud network; by default we create a public endpoint for the API server, and a public subnet for any load balancer services created.

G: Then we have a private subnet for the control plane nodes and a private subnet for the worker nodes, and we produce a service gateway that gives the applications running on the cluster access to other services within the region.

G: We produce an internet gateway, obviously, for public access to the endpoint, and we also produce a NAT gateway, which allows traffic out from the private subnets to the internet; say, if software needs updating, the nodes can reach their repositories. That's the kind of basic default we set up within a region.
G: Within a region we have things called availability domains; there are either one or three, and they are separate data centers. Each of these is broken down into three fault domains, and we spread the nodes across the availability domains and fault domains.

G: So basically, what I did was create that cluster, which is just an empty OKE cluster, then go through and set everything up and initialize it as a management cluster. That gave us... so, this is going to be a bit of a pain.
G: So then, what we can do is run this command here. We have some variables, so we're saying which compute image (which custom image) we're going to use, which compartment we're going to create the cluster in, and then the usual definitions for machine shapes and sizes, and so on.

G: That's going off, doing the reconciliation and creating all the resources. So if I come back here, in the console we can see, if we look at the compartment I just created, that it has created the VCN with all the route tables, security lists, and subnets required. And it's provisioning the control plane; I just did one node of each.
G
So
one
of
the
good
things
that
we
can
do
is
I
created
one
earlier,
which
is
a
more
of
a.
G
A
full-fledged
cluster
in
that
we
had
three
control
plane,
nodes,
running
fairly
small.
G
Instances
but
the
the
the
the
worker
nodes
are
running
four
bare
metal
instances
which
is
64
core
machines,
so
we've
got
a
full
range
of
shapes,
etc
available,
and
for
this,
and
that's
right-
that's
probably
yeah.
That's
still
only
created
that
one
so
that
that's
going
on
and
and
creating
that,
so
it's
working,
just
as
any
other
won't
be
used
to,
but
it's
probably
been
of
more
interest
for
people
to
actually
know
that
oracle
does
actually
have
a
cloud
infrastructure
and
yeah.
G: We've got managed services and we now have Cluster API support. We have it released on our own GitHub repository at the moment, because we've got some things to do to get it into a shape that would be acceptable for requesting a move across into the SIG repo, but we're working on that. Sorry for not sharing the full screen; this was probably a little bit disjointed, but if there are any questions just let us know. Joe is on here as well, so if there are any questions in the chat we can answer them. Thanks for your time.
A: This is great. It also goes to show how the community is growing. Lots of props in chat and in reactions, awesome. Any questions for the OCI team?
C: First of all, thank you for the demo; it was great to see another provider live. Just a comment: if, while implementing the provider and going through the documentation, you found something which is not clear enough, please give us feedback so we can improve the documentation. We can learn from your feedback.
G: Sure, thanks, yeah. I mean, I don't think we found any problems. It's been more, for us... we all know Oracle has historically not been very good with open source, so at first it was more about getting used to being part of the community and learning the right way to approach things. That was our biggest issue, I think, rather than anything technical.
H: Not really a question, more of a note: once the move into the Kubernetes org happens, I think there are some interesting things you'll want to look at. We already have certain features across the board in providers, and from a quick look Oracle already supports the underlying capabilities, so I think it would bring value for the OCI provider to support them too. Once we have office hours for it, we can meet there and start some of those discussions.
A: Awesome, thanks folks. I think next up on the agenda is Mike. Do you want to take it away?
I: Okay, can you see the terminal window? Yep? Cool. Okay, so I guess this will be in contrast to Paul's demo: it's going to be all kind clusters and Kubemark machines, so it's all virtualized. Just for the nothing-up-my-sleeves part of this demo, I'm on a VM here that I created, and I've got two kind clusters: a management cluster, which has the standard stuff we would assume, and a workload cluster that I'm calling kmcp, which, as you can see right now, has just one machine.

I: You can see I've got two machine deployments, currently at zero size. The one that says kubemark-md-0 is just kind of a normal one, and the one that says extra-res-gpu has some GPUs assigned to it. So what I'm going to do now, and I'll just show it, is that I've got the autoscaler running locally in a terminal here, connected to the kind cluster.
I: It's sitting here reconciling, doing its thing, and if I stop it for a second you can see that it has identified both of our node groups as potential candidates for scaling, so it sees both of them. What we're looking at here on the bottom panel is me watching the machine deployments, so hopefully we'll be able to see them scale out as we create workloads. And I've got a couple of workloads that I've created here, so we'll look at the first one I'm going to run.
I: So right now it's pending. The autoscaler, even though I've got it set pretty aggressively, is still going to take 10 to 15 seconds to pick up the pending pod; once it sees the pending pod, it will start to scale out that machine deployment. While we're waiting for that to happen, I'll show you that I've changed the Kubemark provider so that it now adheres to the guidance for opt-in scale from zero, and you can see in the lower part that we're already scaling up the GPU node and the container is creating.
I: In order to do this, though, I had to change the way Kubemark machine templates are created, and I'll show you how that looks, because this is something every cloud provider would need to do if they want to participate out of the box in scale from zero. Just for reference, this is the actual machine template.
I: This is not a machine or a machine deployment, and you can see it exposes the capacity in its status. Now, you might notice I also have this up in the options, but that's just part of the provider spec that's going to be used to create machines and whatnot. The status capacity is where the autoscaler is actually pulling the information from.
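A minimal sketch of what that opt-in contract looks like on an infrastructure machine template: the template advertises the expected node size under status.capacity, which the autoscaler reads when there are no existing nodes to sample. Names, apiVersion, and values here are illustrative, and the exact contract was still being finalized at the time of this demo:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4   # illustrative version
kind: KubemarkMachineTemplate
metadata:
  name: kmcp-md-0                                      # hypothetical name
spec:
  template:
    spec: {}                                           # provider fields elided
status:
  capacity:
    cpu: "2"               # resources a node built from this template will have
    memory: 8Gi
    nvidia.com/gpu: "1"    # only on the GPU-flavored template
```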
I: And, just to continue with the demo, we should see that extra GPU node set scale out. While waiting for that to happen, what I'm going to do next is switch to a different type of workload that does not require GPUs, and I'm going to try to get it to push out the other machine deployment as well. It usually takes 30 seconds to a minute for some of this stuff to come up, but while we're waiting I'll show you what the other machine template looks like.
I: So, create that, watch our pods again... oh, and now the problem is I need to scale this up, because the GPU nodes I created are big enough that they'll actually accept those workloads; I didn't craft my resource requests tightly enough for the demo. So let me just scale it up to some absurd number, maybe 25 replicas.
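The workload being scaled here is just an ordinary Deployment whose pending pods trigger the autoscaler; a minimal sketch (name, image, and resource requests are illustrative, not the exact manifest used in the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-workload                 # hypothetical name
spec:
  replicas: 25                       # enough pending pods to force a scale-out
  selector:
    matchLabels:
      app: cpu-workload
  template:
    metadata:
      labels:
        app: cpu-workload
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:                  # requests are what the autoscaler sums up
            cpu: "1"
            memory: 1Gi
```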
I: Finally... and we should see nodes coming up. The mechanism by which this other machine set, or machine deployment, is able to scale has nothing to do with the provider and everything to do with the annotations I'm putting on the machine deployment. In this manner users can use scale from zero even if the provider has not been updated to take this into account yet. And you can see it has already scaled up to five replicas, so we've scaled that to our maximum.
I: All right, now what you can see is that this machine deployment has these annotations for capacity, and the cluster autoscaler understands them. So even though the machine template is not exposing the capacity, as a user I can still get access to the scale-from-zero functionality by filling out this information, and it helps inform the autoscaler what the size of the nodes will be.
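Roughly what those annotations look like on a MachineDeployment; a minimal sketch with illustrative values. The annotation keys follow the scheme used by the cluster-autoscaler Cluster API provider, but since the patch discussed here was still in progress at the time, check the autoscaler docs for the keys that finally merged:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: kubemark-md-0
  annotations:
    # autoscaler discovery plus node-group size bounds
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    # node-size hints the autoscaler uses when scaling from zero and the
    # infrastructure template does not expose status.capacity
    capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
    capacity.cluster-autoscaler.kubernetes.io/memory: "8G"
# spec unchanged; replicas start at 0
```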
I: So that's about it; I guess that's the demo. I'll take any questions.
I: Oh, and by way of update: I still have a little bit of cleanup to do to get this merged into the autoscaler. I'm still working on the patch; it mostly works, but I need to clean up the documentation, and there's a nasty issue right now wherein, in order to make this work, the cluster autoscaler has to look inside the machine deployment or machine set and determine what the infrastructure template is.
I: It then has to look that infrastructure template up, and that's totally dynamic, so I'm having a little trouble figuring out how to set up client-go, which is what the autoscaler uses for its client, to discover the machine template type and then add it to the informer so that we're caching it properly. Right now I'm getting all sorts of nastiness from the API server just because we're hammering it trying to look up these machine templates.

I: So I'm hopeful that in the next week I'll have this cleaned up and maybe have the PR ready for review. And then providers can implement this if they choose to, and if not, users will still have access to it on any of the clouds that Cluster API supports. So yeah, that's it for me.
A: Mike, do you know if that's something you folks would be interested in adding in the future? Oh yeah, I guess I should clarify: I was referring to the comment Fabrizio made that we need autoscaler testing in CAPI. I think this came up a while back as well, where CAPI was updated but we don't have any signal within Cluster API itself that it will also work with the latest, maybe main, version of cluster-autoscaler.
I: I mean, totally, and this is something Fabrizio and I have talked about. I've actually talked about it with the SIG Autoscaling community as well; I've probably been talking with them about this for over a year. Ben Moss was doing some of the early work on this, and I'm trying to pick up where he left off.

I: What I would like to see is the autoscaler doing pre-submit tests using CAPD (the CAPI Docker provider) and CAPI Kubemark to run lightweight tests against the upstream autoscaler. I think it'd be cool if we could do that kind of testing, but I would also like to see us doing it in the autoscaler repo as well, because right now there's no end-to-end test performed there on a per-commit basis, and I think this is a lightweight way we could exercise the core mechanisms for any PR that might go in there. So I'm really excited about the possibility that we could do more testing around this in the future.
A: Absolutely. Do folks have any questions, comments, or concerns on Mike's cluster-autoscaler scale-from-zero demo? Once, twice, three times... all right. Stefan, you have the next topic.
D: Yep, can you open the issue, please? Oh, I didn't link my own issue; it's the newest issue in the cluster-api repo. Once again, some context: we learned today that upstream they are discussing dropping the clusterName field in ObjectMeta. I'm not sure who knows about it, but it has been there for a while; it was always cleared on read and on write, so it was totally unused, and there was a big comment there...
D
Please
don't
use
it
and
api
server
doesn't
use
it,
etc,
and
now
on
that
first
kubernetes
corners
link,
there's
a
discussion
to
actually
just
drop
the
field,
and
because
of
that,
I
looked
a
little
bit
at
our
current
usages
and
we
have
only
a
few
of
them
and
they
all
should
be
accidental
and
they
should
all
go
away.
D: Yeah, that's just a flake; I'll retry that one. Can you open the file diff, just to show how easily it happens that someone uses the wrong clusterName? Essentially, if you want the regular cluster name: in almost all of our resources we have something like spec.clusterName, and if you just forget the "spec" you end up with x.ClusterName, which actually resolves to ObjectMeta.ClusterName; and because ObjectMeta is inlined, you don't have to write "ObjectMeta", so it compiles.
D
So
we
have
those
three
occurrences
in
call
copy,
and
I
found
a
few
in
in
copper,
cap
c
b.
Probably
other
providers
too.
So
just
a
general
pc
take
a
look
at
your
code.
You
should
probably
not
use
object,
meter,
custom
name.
If
you
used
it,
it
should
have
been
always
empty.
D
H: Yeah, for providers: if you see that you might be writing that field, then actually removing its usage might be a bit harder, because someone might depend on it externally. If it's being used as read-only, that's fine, as is the case in this change, but if you're setting it, that's another story...
D: ...that you probably need to discuss; I think it's not so easy. Although, if you set it locally and then wrote the object to the API server, that field was just dropped, so you couldn't actually persist it in Kubernetes. It's only an issue if you tried to use that value, expected something, and it was empty; or if you, let's say, played around with it only locally and handed that object around. As soon as you wrote it to the API server, it should never have gotten there.

D: Yeah, but I think there are some interesting cases. For example, in CAPA I saw that (I might be misinterpreting this, but if I read it correctly) the clusterName was used to set tags on an EKS add-on, and that tag now doesn't have the cluster name. So it might not be super trivial to fix that now and stay compatible with all the EKS add-ons which have been created without that tag. But I could be totally misreading this; I don't know anything about EKS.
A: And I see that we're opening issues in every provider, which is great. Yeah, let's keep track of it; this field should definitely not be used, but it seems we're on top of it. Thanks, Stefan, for looking into it. Hi Sagar, you have the next topic.
F: Yeah, thanks. So, I think I brought this up last week, and I saw that a bunch of folks did go through it.
A: I haven't had time to review this proposal, but for folks that have: are there any outstanding items? Fabrizio?
C: Well, I think the biggest thing we have to decide is this: the original proposal suggested adding basically a set of network fields to almost all the providers, and during discussion of the document we kind of arrived at the conclusion that if we move this information onto the Machine object (it's really basic networking information), we simplify a lot of things on the provider side and we also make, let me say, the API modeling consistent, because this information is modeled only once, in a single place. So this is, in my opinion, the biggest thing we have to agree upon: do we do this only on the provider side, or do we do this with a change to the core API? If people can comment on this, then in my opinion everything else will follow.
A
I'll
say
a
rule
of
thumb
from,
and
you
know
past
discussions
that
are
very
similar
to
this
would
be
like
if
this,
if
this
is
something
that
we
feel
will
be
used,
and
I
can
think
about
like
a
number
of
ways
that
you
know
this
could
be
used
across
providers.
A
Close
jpi
would
probably
be
the
best
place
to
do
it
and
now
like
in
terms
of
responsibilities,
it
does
feel
a
little
bit
overreaching,
but
at
the
end
of
the
day,
it's
also
like
something
that
you
know
we
we
definitely
can
provide
if
as
a
functionality,
because
you
need
my
pen
to
spin
up
clusters
in
some
environments.
A: For example, we could allocate the VPC CIDR block. Maybe that's something we should think about, because if we think about Cluster API as the single management point for lots and lots of clusters across cloud providers, then usually, at least from what I've seen, most users end up in a situation where...
A: ...you still need IPAM across cloud providers if you want connectivity between them, because you cannot have overlapping CIDR blocks in different VPCs across cloud providers if you want them to connect to each other. So this would be a good way to have that IPAM integration across cloud providers, which you couldn't have done before unless you did it by hand or with an external IPAM provider. Having those VPC CIDRs come from an IPAM integration that we provide could be very beneficial, especially because right now, I think, the default VPC CIDR for AWS, for example, is a /8 if I remember correctly, which is pretty wide.
A: So this could be something like, "hey, I'm going to allocate a CIDR block"; maybe it would default to something smaller, like a /24 or something like that, and then other providers could do the same. So, just some food for thought: if we do put it in CAPI, I would like to see it as an integration point as well.
J: Yeah, thanks, Vince. I don't know if he's joining the call today, but I basically just wanted to let everyone know that he put this PR in place, so any feedback is welcome.
A: Yeah, this is great to see; I'll definitely take a look. Does anybody have any comments on the label propagation proposal? For folks that don't know: currently, if you want a Kubernetes node to come up with labels predefined...
A: Now, oftentimes labels can be used to assign permissions to nodes as well, or they can be used, for example, in node selectors and things like that. So the biggest concern we had with syncing arbitrary labels from the machine controller to the node itself was that you could potentially assign yourself labels that you shouldn't have been able to assign, including labels from domains that are outside of Cluster API.
A: The authority will be the Cluster API machine controller, so you won't be able to add another label to the node itself, outside of Cluster API, within that prefix; the controller will overwrite everything in that prefix. So this just introduces a new prefix into Kubernetes land that we own, and if you want to restrict things like node selectors, you can do so on that node prefix, knowing that Cluster API owns it.
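A minimal sketch of the kind of usage the proposal enables: labels set in the CAPI-owned domain on the machine template flow down to the corresponding Node. The prefix and names below are illustrative, since the exact domain was still under discussion at the time:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: gpu-workers                  # hypothetical name
spec:
  # other required fields (clusterName, selector, etc.) elided
  template:
    metadata:
      labels:
        # a label in the CAPI-owned domain; the machine controller would
        # sync it to the Node and act as the authority for this prefix
        node.cluster.x-k8s.io/pool: gpu
```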
A: All right, I think we're at the end of our agenda. Any other comments, questions, concerns, or last-minute topics for today?