From YouTube: 20200415 Cluster API Office Hours
I
We don't have any PSAs today. I guess, like from last week, we're just getting ready for v0.3 — if you're interested, please do check the milestone on the CAPI page. So I'll just move down to general questions. This is a time where we allow anyone in the community to ask any kind of question. If you're a beginner and you want to know more about Cluster API, feel free to ask anything during this time. I'll pause here and allow you to unmute and ask questions.
J
We put that on the agenda two weeks ago — thanks for the opportunity. As agreed in the Slack and GitHub issue where we corresponded, I will try to limit it, even limit it to 20 to 30 minutes altogether. The demo will be done by myself and Charles, who will also introduce himself. He will show the demo, and I think it makes sense to provide some context on it before you look at the screen, so I'll take five minutes. So, if I can share my screen — I'll stop sharing so you can. Okay.
J
In case you are wondering what's going to be presented today: I'll introduce myself first. I'm the Kubernetes squad lead in Deutsche Telekom, running a small DevOps team there, and we are partnering with Weaveworks — the company Charles comes from — and he will do the actual demo. In terms of context, if you wonder, all around the world, what the heck Deutsche Telekom is: we are a leading global telco plus a globally present systems integrator. Our biggest business segment is, since recently, T-Mobile US, where we have T-Mobile plus Sprint.
J
Since a couple of days it's an officially confirmed merger. The second largest segment is Germany, and the third is Europe, which means a number of properties around Central and Eastern Europe, and then the global T-Systems systems integration. So it's a fairly big company. We — and Schiff — come from the Telekom business segment Germany, our home market, where we are running essentially fully integrated telecom provider operations, which offer everything from access services — fiber-to-the-home, mobile up to 5G — TV services, data, voice: the full palette. So it's not like a mobile-only provider, as in the U.S.
J
Specifically — since even that is a big term — Schiff comes from the network technology department, the network technology part. For those not familiar with telco organisations: technology for us has two parts. One is network technology; the other is IT, information technology. Network technology is everything that's not dealing with CRM, ERP, the typical enterprise things.
J
What's actually our challenge, problem statement, or requirement? We have many locations in network technology. Some are core locations, some are edge locations, some are far-edge locations. To make it even more colorful, some locations have compute based on vSphere, some have OpenStack-based compute, some have pure bare-metal compute — we even have vCloud Director from VMware. To make it even more complicated, due to historic reasons and network segmentation, we have networks which are not visible to each other.
J
So: all private networks, most of them air-gapped, and those locations are somehow isolated from each other, but they are also walled in behind three different firewalls — a fairly complex setup, which is not easy to change and consolidate, and probably there is no case to do that over a period of time. But in all those locations we need one or more Kubernetes clusters — some today, some in the long term.
J
So what we have as our task, and what we realized so far, is the need to manage many Kubernetes clusters. When I say many: our estimate is a thousand to three thousand five hundred plus, and that's not taking into account the mobile cell sites, which could come — still far, far off — through the cloud radio access network. So we need to manage those clusters across all these locations, on different platforms, on different networks.
J
As a goal and our starting point, the design decision was to build a cloud-native engine for these clusters: using upstream Kubernetes, making them hundred-percent standalone clusters, hundred-percent infrastructure-agnostic, and hundred-percent GitOps-managed and automated. The applications that come onto them are handled that way as well, but we talk here about the infrastructure.
J
On a very high level it looks briefly like this. In this initial setup there are a couple of bigger locations. The one here in the middle, with the darker background, is the Schiff control plane in Berlin and Bremen — two cities. Out of that control plane — which is also, as you will see, a Kubernetes cluster, nothing new for CAPI folks — we are actually creating the clusters in those different, remote locations. So the major use case is remote locations, because of the size of these locations.
J
We will also create some clusters in the same location. As you can probably intuitively see, it fits very well — we'd say CAPI fits our context almost perfectly. Actually, to be honest, when we started last autumn it was before alpha2, so we wanted to develop the whole engine ourselves, using some custom in-house developments and so on.
J
But then we looked at what happened with alpha2 and decided to place our bet on CAPI, and did that without any compiled code. We realized almost everything, but there's still a way to go. Our control plane consists of a CAPI management cluster plus some other components around it — the list here is not exhaustive — and out of that we create what we call Schiff tenant clusters. These are nothing else but the workload clusters, with additional components, and we are going to present how we are adding them.
J
This is the "marketecture" slide. Essentially, what we like very much about CAPI is this notion of pluggable infrastructure providers, and this is how we put a layer on the tenant cluster: we add a layer of observability in an automatic way, and we add components in the environment that enable our internal users to have the entire chain. We manage everything with Flux and Cluster API, so our main user interface is actually GitLab — a git repository for the entire thing.
J
On top of that we are working with WKP to add some fleet management capabilities, cluster modeling and so on, but this is not the subject of today's presentation. Maybe just to mention: we are starting practically this week or next week — depending on when we manage to put all the things together — with the first production clusters on CAPI 0.3.3 with CAPV 0.6.3. This is our first integration; then we'll move on with a bare-metal integration. The original plan was with Metal3.
J
But we see the movement around Tinkerbell, and Packet open-sourcing it, and there is also this Firecracker thing with KVM we might look at first — it's pretty undefined — and then we'll move on with the AWS and OpenStack providers. So, the demo use case for today: Charles will show it briefly. The goal is to show you how we consume CAPI, and why Helm charts of CAPI custom resources are essential to it — and it would be nice to get some traction on that one.
J
So we have here a management cluster that has all the CAPI deployments and the CAPV deployments in it. This management cluster also runs Flux CD, which is a GitOps agent. Flux CD listens for changes in our private git repo. In the git repo we have our Helm-based templates of the custom resources, and then for each workload cluster we have a values file — that's a simplification, but in essence it's a values file. So we have all the resources; here is just an example. The values file contains the parameters.
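A per-cluster values file of the kind described might look like the sketch below; every name and field here is an illustrative assumption, not taken from the actual demo:

```yaml
# Hypothetical per-workload-cluster values file (all values illustrative)
cluster:
  name: workload-cluster-1        # assumed naming scheme
  namespace: tenants
  kubernetesVersion: v1.17.5
workers:
  replicas: 3
vsphere:
  server: vcenter.example.com     # placeholder vCenter endpoint
  datacenter: dc-01
  network: tenant-network
```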
J
Trans
can
show
it
also
in
more
details
and
all
the
copy
resources,
custom
resources,
Caffey
like
machine
deployment,
cluster
machine,
vSphere
machine
and
so
on.
We
we
actually
worked
out
as
a
templates,
so
that
one
template
of
machine
deployment
here
in
an
example
can
be
used
for
a
multiple
clusters
in
just
changing
the
value.
So
essentially,
what
happens
we
as
admins?
We
would
make
a
change
or
create
a
new
cluster
new,
a
new
folder
in
a
git
repo,
or
do
some
change
on
the
values
wherever
and
then
a
flux
will
detect
that.
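As a rough illustration of the "one template, many clusters" idea, a Helm-templated MachineDeployment for the CAPI v1alpha3 API (the version matching the 0.3.x series discussed here) could look like this; the `.Values` paths are hypothetical, not the team's actual template:

```yaml
# templates/machinedeployment.yaml -- a sketch, not the actual Schiff template
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: {{ .Values.cluster.namespace }}
spec:
  clusterName: {{ .Values.cluster.name }}
  replicas: {{ .Values.workers.replicas }}
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: {{ .Values.cluster.name }}
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: {{ .Values.cluster.name }}
    spec:
      clusterName: {{ .Values.cluster.name }}
      version: {{ .Values.cluster.kubernetesVersion }}
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: {{ .Values.cluster.name }}-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: VSphereMachineTemplate
        name: {{ .Values.cluster.name }}-md-0
```

Rendering the same template with a different values file yields the manifests for another cluster, which Flux then applies.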
J
Essentially, it will apply that in the cluster. In this case it will create a resource set for workload cluster one, to create it; then, as you know, these resource sets create the cluster, and then configure the cluster and join it. What we also use this for is how to bootstrap and bring up all the things after the cluster comes up — like the CNI and many components that we use in the tenant cluster.
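The "resource sets" mentioned here correspond to CAPI's experimental ClusterResourceSet feature (behind a feature flag in the 0.3.x series), which applies ConfigMap- or Secret-wrapped manifests to matching workload clusters. A minimal sketch, with assumed names:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-cni                # hypothetical addon name
  namespace: tenants
spec:
  clusterSelector:
    matchLabels:
      cni: calico                 # applied to clusters carrying this label
  resources:
    - kind: ConfigMap
      name: calico-manifests      # ConfigMap holding the CNI manifests
```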
J
So we'll show here how we bring up two things — the CNI and one component, cert-manager — in the tenant clusters, or workload clusters. This we do as part of the post-kubeadm sequence, also in a templated way. So, not to take much more time: if there are any questions on that, maybe we can do them after the demo, but I would hand over to Charles. I'll stop sharing, Charles, and you can take the screen.
C
Great, alright. So we have essentially a stripped-down demo to save time, and we've got it such that you should be able to see the essential bits: the GitOps committing and the Helm charts. I'll then take you through the current ones while the video completes, if anyone's interested in that. So, without further ado, I'll click Play. All it takes is moving a Helm release into the correct location and committing that.
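At the time of this meeting, Flux v1's Helm operator drove such deployments through a HelmRelease resource; committing something like the following is the "moving a Helm release into the correct location" step (the repo URL and paths are invented for illustration):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: workload-cluster-1
  namespace: tenants
spec:
  releaseName: workload-cluster-1
  chart:
    git: ssh://git@gitlab.example.com/schiff/cluster-templates.git  # placeholder
    path: charts/capi-cluster                                       # placeholder
    ref: master
  values:
    cluster:
      name: workload-cluster-1
```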
C
Apologies. So, as we mentioned, here's your tenant cluster and its contents: your pods coming up — you'll see the CNI and the Helm operator within the cluster — the master and additional masters coming up, the leader election happening, and that's it: your cluster is pretty much provisioned and in operation.
J
So certainly we built around the premise that we have things templated, so that a single change — a git commit — triggers the entire chain and deploys the workload cluster; and then the same workflow is used by the tenant cluster to equip itself with all its components. We didn't want to bother you with our opinionated setup for that, where we have more than 10 components, but as an example here, cert-manager is the one component that comes in.
J
What we essentially found out is that, regardless of GitOps, Helm can be used pretty well as a templating engine for resources, and more. We are going to maintain these templates for the custom resources — Charles will show them — as we depend on them. So we are going to follow new releases, make changes, do the testing and so on.
J
We thought, if that's an interesting piece for some of the community members, we could also put it somewhere — either in the main repo, or we could certainly put it in our Telekom repo and show it there. I know there was a discussion around the pull request — maybe for those who didn't have the context: Charles was making some pull requests a couple of weeks ago.
J
From our perspective, if that resonates, it may be something we can take over in terms of workload. But I think we would benefit more if it's part of CI for the releases, so that things are tested together and not separately; we could also do it separately, though. We just wanted to share this and get some feedback on it. So, Charles — maybe we have two more minutes?
C
On my screen you see the CAPI Helm chart. We have each of the CRD components in a separate file, for ease of diffing, but also due to some issues around how Helm behaves and so forth — we found that the main things run a lot more smoothly this way.
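The chart layout being described — one CRD per file, plus the separate deployable items — would look roughly like this (directory and file names are guesses, not the actual chart):

```
capi/
├── Chart.yaml
├── values.yaml                  # currently near-empty; versions are static
└── templates/
    ├── crd-clusters.yaml        # one CRD per file
    ├── crd-machines.yaml
    ├── crd-machinedeployments.yaml
    ├── certificates.yaml
    └── controller-deployment.yaml
```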
C
We have the various deployable items, such as certificates, deployments and so forth; they're just standard Helm templates, and we have a values file for the CAPI and CAPV components. As you can see, there are currently no values, and this is because we have not actually templated out the deployments. Everything is currently statically defined in the template, because a lot of this work has been moving so fast that we didn't see the value, at this point in time, in templating out the various versions — we found that sometimes things changed quite a bit.
C
Then the actual cluster components, where we've separated each item out — so, for example, the HAProxy load balancer. You can see the various values; at the moment we have certain elements very thinly configured during our PoC and prototyping — so, for example, networking and so forth are not extensively configured, because this suited the requirements for crafting everything else. And here's an example of a MachineDeployment.
C
The values file looks something like that — the various components in a Helm chart. I can't scroll through it for you, because at the moment there are some credentials in there that, as you can see, are not committed; I had put them in there. So I don't want to scroll further down, just in case I expose something.
J
I think that's it. To summarize — thanks, Charles — with the CAPI parts we didn't have any major issues, nor with CAPV. We have a number of things which are not that big; we discussed them with the team, and there are some issues we can file, but we wanted to provide context first. And yep — any questions are more than welcome. I'll go on mute, because Charles's doorbell is ringing here.
J
We'll probably publish some of these things, including architectural charts and so on, at some point in the future, but our first goal is to get the whole system up and running, and then hopefully provide some operational experience of how that works on a larger scale in terms of use cases.
L
Thanks. So I guess I would be remiss if I didn't ask this question, since you were working so closely with all the Cluster API machinery and whatnot: were there any areas where you ran into trouble, or where you feel there were gaps in what Cluster API was providing, that you think should be addressed?
J
With CAPI, as I said, it fit our use case extremely well. The only thing — Charles also mentioned it — is that we tried to jump onto the running train and do some production readiness, which of course is not yet your commitment in terms of stability, since it's alpha3. So we started with alpha2 and then went early onto the 0.3.0 releases until something stable. So — all business as usual for this stage of development, and I think you do a very great job.
J
We'd like to build on that, to also extend the use case and contribute some things around it. But, you know, we'll see how it goes in the metal part — that's where we might want to go, because it's honestly one of our core use cases, for all edge sites and all telco access sites. There is no sense to have it there.
I
Alright — a comment actually came to me. First of all, thanks for the great presentation; this was really great to see, both as a consumer and as a developer — the work you have been doing with Cluster API is really nice to see. For the Helm question: from a maintainer's point of view, we've decided to standardize on kustomize, which is what controller-runtime, controller-tools and kubebuilder also use. So I would separate the two things.
I
One is: how do you deploy Cluster API components — as in the CRDs, the controller deployments, the webhooks, etc. — and the other one is: how do you create clusters? For the latter — clusters, machine deployments — on templating, there is a proposal for templating on the roadmap for alpha3; I guess it hasn't merged yet, but I would suggest you look at that issue. The idea is to add a templating engine into clusterctl, so that you can pretty much apply —
I
— clusterctl-rendered objects — Clusters, Machines, MachineDeployments, etc. — with a templating engine which is going to be pluggable. I know we have been talking about ytt support, but if this becomes pluggable, I'm sure you could add Helm support, for example. The issue is #2339, so this is where I would refer you to add more input — actually, you are already there, so that's great. Yeah, we can work more on this. On the other side — for Helm charts regarding how to deploy Cluster API itself...
J
The thing is, our use case doesn't fit a command-line tool, because we would need to wrap clusterctl somehow in a container and then run it somewhere. We don't see much need for that when we can directly deploy the resources — you folks did a lot of work to make them reconcilable by the components. So when we have the manifests, we deploy them. That's actually the whole beauty of the thing.
J
That's why, for us, clusterctl is not the usable part, as we don't plan to do any kind of scripts or any automation from the command line — everything goes directly from git. That's why we wanted to bring this up, as a subset, or maybe it could be something like a non-clusterctl-based templating kind of idea.
I
Just to clarify what Ricardo said: clusterctl is not just a command-line tool, it's also a library, so you could use it in a reconciler as well — and if we ever write an operator, GitOps could actually be a good use case for it as well. But yeah, these are actually good thoughts and use cases that we need to bring forward in Cluster API. We can discuss in the issue.
G
I think the main thing is that we template our clusters, and deploying the CRDs and all of that should not be directly in a Helm chart. So I think that part belongs more in the kustomize stuff, which you are currently doing, and only the templating of these clusters should be somewhere maintainable for the broader community.
J
The two things are: we did that without writing a single line of executable code, and we wanted to keep it like that. Of course — operators, the library — we could go for that, but it works for us pretty nicely now on this level, without coding. It might also be the case that some people will be interested in doing just that, without going into specific components. But it doesn't hinder us from going in that direction at some point — using it as a library and exploring that path.
M
Curious if anyone has strong opinions about introducing a tool like Codecov — one of these cloud-hosted code coverage tools that will comment on your pull request to give you a report on whether code coverage increased or decreased based on the pull request. I thought it might be nice — not as a blocker or anything, but just as a signal for whether people are adding tests or lowering the test coverage.
A
Okay, yeah, I can lead the discussion. First, the kind of problem that we have faced working with bare metal: when we want to use the MachineHealthCheck, for example, in any bare-metal use case, we basically can't use — or definitely don't want to use — the default way to remediate unhealthy machines. At the moment, what it does:
A
It just deletes the machine and creates a new one. But especially in bare-metal use cases we might need to do something other than just deleting the machine object — which would start, for example, reprovisioning of the bare-metal machine — and then we would need to add an extra machine to get another node in our cluster.
A
We would like to see a new CR that tells the provider that some machines are unhealthy and need remediation; this CR would be reconciled by a remediation controller on the provider side, and at the same time MachineHealthCheck can check the status of the remediation — is it still ongoing?
A
Is it failed, or whatever happens there? And I saw that you have also discussed these things concerning the kubeadm control planes, so I think this is the right time to think a little bit about: is it enough, for example, to just annotate the machines or the KubeadmControlPlanes? And if we only go with the annotation, what are the limitations — for example, if we have a long-running flow that we need for remediation in bare metal?
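For context, the default remediation under discussion is driven by a MachineHealthCheck like the following (CAPI v1alpha3 shape; the names, selector and timeouts are illustrative). When a machine trips the unhealthy conditions, the controller deletes it and the owning MachineDeployment replaces it:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: workload-cluster-1-mhc    # hypothetical name
spec:
  clusterName: workload-cluster-1
  maxUnhealthy: 40%               # stop remediating if too many are unhealthy
  selector:
    matchLabels:
      nodepool: md-0              # illustrative label
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 300s
    - type: Ready
      status: "False"
      timeout: 300s
```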
N
This issue has actually been a long-standing one with Kubernetes. I think generally it's going to depend on what we define as unhealthy. If it's just a machine that is in a failed state, then I would say that's fine; but if it's a machine where, for example, we restarted it and the kubelet did not come back up, then I'm not sure we can operate remediation — because if you have workloads that are stateful and running with persistent volumes, you might get corruption issues with your data.
D
As for corruption, we already faced these considerations downstream. At least in bare metal, we first power off the host — power off the bare-metal server — and then we delete the node, so we should not face any corruption in this case. And as a general note, I think it makes sense to separate machine health checking and machine remediation. Machine deletion is just one type of remediation that could be done. That's my point of view.
A
Yeah, I think so too. Also, if we separate the remediation from the health check, then, at least I think, it makes MachineHealthCheck more flexible — and we don't know all the use cases that we might face in the future; new providers coming in might need some different kind of remediation on their infrastructure. So, any thoughts from Vince or any other folks closer to CAPI?
K
Yeah — let's get it into a Google Doc. Google Docs is probably better for collaboration, for starters: we take all the ideas that we've been tossing back and forth in #2846 and the KCP remediation issue, and try to put together a list of possible approaches — which could include annotations, could include new CRDs — and then a preferred approach after listing out the options that we have. Then we can all work together to try to come up with an implementation that makes sense.
A
We've been circling around these ideas from the Metal3 perspective, or the OpenShift perspective — we have done some work on OpenShift already with the MachineHealthCheck and machine remediation controllers — so we have some ideas, and I'm definitely not against any kind of annotation, as soon as we find out all the corner cases that might make it a little bit different. But I agree: we should list all the options first and then probably make a more official proposal. So I would agree with that one.
P
...this request was fulfilled — and how long am I supposed to wait until I believe the node is healthy? What path do I go down if the node never comes back, and you just get into a whole big mess? So I think getting a proposal fleshed out that covers those corner cases is what matters, because that's really where I think the interesting parts are.
I
With regards to the other thread — the self-healing control plane — we met this morning with Joe and Alberto, and we are going to solve the immediate problem of enabling the MachineHealthCheck for control planes, and treat this external remediation as a separate topic, because they are two completely different things, and it's just too much of a big problem to solve at once. I just wanted to point that out — but yeah, plus one on getting a proposal out and iterating on it.
I
I also wanted to point out that if we want to make sure this proposal for external machine remediation goes into alpha3, it needs to be backward compatible and we should not have any breaking changes; otherwise we can just say we'll do it in a later release. Joel, you have a question? We have one minute left, and then we will adjourn.
Q
I was just going to add, about the reason we think we can achieve this: we can do this in a non-breaking way for users. The idea is that internally we can change the way we're triggering deletes for KCP, but without adding any extra fields to any API, or anything like that.