From YouTube: SIG Cluster Lifecycle 2021-03-23
A: I see your name as a new participant. Do you want to present yourself?
B: Hi, yeah, sure. Hey everyone, I'm Rajashree and I work at AWS. I recently started learning about Cluster API and using it, so I'm still fairly new to it, and I just wanted to attend one of the meetings and discuss the KEP that I have linked there.
A: Okay, maybe we can talk about this now. So basically, this is the original KEP for etcdadm, I believe? Yes.
A: So basically you're interested in Cluster API, and you want to start using etcdadm. Is that correct?
B: Yeah, that's right, and for that I'm more than happy to work on a PoC to just see how it would look. I had written a few things down based on what I've seen so far, like an idea of how we can add a bootstrap provider that uses etcdadm, just like we have one for kubeadm.
B: Actually, no. I haven't reached that point; it's pretty rough right now. I do have a Google doc, but it's not following the format that Cluster API proposals use. I can definitely clean it up and turn it into that format, but I can still share that Google doc right now. Is that okay?
A: Yes, you can. I guess you can briefly present it to us, if you want. Do you want me to give you access to share the screen?
B: Okay, okay. I hope you all can see my Google Docs screen. Yes? Okay. So what I was thinking is we could add a separate bootstrap provider that will use etcdadm to bring up an etcd-only cluster, because from what I've seen, Cluster API right now doesn't provision and manage an external etcd-only cluster, right?
B: So for that, this is a very rough initial idea, and a lot of my time just went into understanding how Cluster API works, because I haven't worked with it before. So we could have an object, EtcdadmConfig. From what I know of etcdadm, the init and join commands, the way I have set up an external etcd cluster using it, didn't require a lot of additional configuration, but I had to pass in the server SANs to provide the public IP addresses of the nodes that I was creating, so that I could use them as endpoints while talking to the etcd cluster, and the certificates directory.
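(A minimal sketch of what such an object might look like; the EtcdadmConfig kind, API group, and the serverSANs and certificatesDir field names are illustrative assumptions, not something stated in the meeting.)

```yaml
# Hypothetical EtcdadmConfig; group, kind, and fields are illustrative only.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1
kind: EtcdadmConfig
metadata:
  name: etcd-cluster-bootstrap
spec:
  # Public IPs (or DNS names) of the etcd machines, added as server cert SANs
  # so clients can use them as endpoints for the etcd-only cluster.
  serverSANs:
    - 203.0.113.10
    - 203.0.113.11
  # Directory on the machine where the CA certificate and key are expected.
  certificatesDir: /etc/etcd/pki
```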
B: I just added that in because of how the bootstrap provider will work. So the initial idea will be the same: whenever we start provisioning this cluster, each machine or node will try to acquire the lock, which will decide which becomes the first etcd node. Once any node acquires the lock, we will use the init configuration to generate the etcdadm command, and we will also have to store the... okay, so this is where I'm a little confused.
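(As one illustration of the init lock being described: a Kubernetes-native way to implement it is a coordination.k8s.io Lease in the management cluster. This is an assumed sketch, not the proposal's actual design.)

```yaml
# Illustrative only: the first machine to acquire this Lease runs
# `etcdadm init`; the others wait and join the resulting cluster.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: etcd-cluster-init-lock
  namespace: default
spec:
  holderIdentity: machine-0  # the machine that won the race
  leaseDurationSeconds: 60
```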
B: But I don't think we can do that, right? Because that would involve copying the certs that were created on the first node and then copying them over to the other nodes. So I was thinking that Cluster API itself can generate the certs, sorry, can generate the CA cert, and then we can create the init configuration with the certificates directory field.
B: Sorry, I don't know if this is making sense or if I'm just rambling on, but okay: the CA cert is generated by Cluster API, and the bootstrap data secret that we generate will contain the commands, like write_files and runcmd. write_files will basically contain the exact CA cert key pair and the path where it should be copied over, and then I think if this node runs etcdadm init, it will use the CA cert key pair to generate all of its own certs: server, peer, and client certs.
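(A rough sketch of the kind of cloud-init bootstrap data being described; the paths and the exact etcdadm invocation are assumptions for illustration, not actual provider output.)

```yaml
#cloud-config
# Illustrative user data for the first etcd node.
write_files:
  - path: /etc/etcd/pki/ca.crt
    permissions: "0640"
    content: |
      -----BEGIN CERTIFICATE-----
      ...CA certificate generated on the management cluster...
      -----END CERTIFICATE-----
  - path: /etc/etcd/pki/ca.key
    permissions: "0600"
    content: |
      ...CA key, used by etcdadm to sign this node's server/peer/client certs...
runcmd:
  # etcdadm init generates the node's own certs from the CA key pair placed
  # above (the certificates directory comes from the config), then starts etcd.
  - etcdadm init
```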
B: So I think that should be fine. Any questions, suggestions, or anything so far?
A: Yeah, something useful about following the Cluster API template for proposals is that people looking at this will be able to understand where you're coming from: what is the use case, why we want to do this. In terms of the certificates, the CA in particular that etcd uses should ideally be signed by the private key of the root CA that is in the cluster.
B: Okay, so just so I'm getting it right: the kubeadm provider generates the root CA, and the etcd CA key needs to be signed by that?
B: So how does that work? Okay, I don't know how that works. Is there something that I can read for that, or...?
A: In the current state of your proposal, you can start transitioning to the new proposal template that is used for Cluster API proposals, and when people see the proposal they can also give you feedback. They can try explaining to you how the kubeadm bootstrap provider works, and maybe you can get some ideas from the Cluster API maintainers.
D: Okay, great, thank you, sorry about that. I think this is super exciting. Just to give you a bit of background on the etcdadm project: we started that by merging two projects. One of them, and I see Daniel here, is the etcdadm command-line tool from Platform9, and then the kOps project was building something called etcd-manager. We are very close to merging the two of them.
D: The way I would describe the goal is that etcdadm will be sort of the CLI or interactive tool, and then etcd-manager will become a sort of self-driving layer on top that will automate some of those things. Currently they aren't entirely integrated. We are, I think, at the stage where we're very close to being able to build and run etcd-manager from the etcdadm repo.
D: All the code is in one repo, and we'll soon be able to switch the sort of canonical location to that repo. So that's where we are, and then we can start unifying the behaviors.
D: If you look at how etcd-manager works, it does a lot of this complicated lock and mutex stuff, and a sort of leader election, to try to orchestrate some of these things and manage that. So this makes a lot of sense.
D: One of the things I think is interesting is that in the case of Cluster API, a lot of the work that etcd-manager does could be greatly simplified. A lot of the work etcd-manager does is trying to construct a leader-election protocol without anything really to work with, and in the case of Cluster API we are running on a Kubernetes cluster, and so you could...
D: You could imagine writing this as a controller that runs in the management cluster and runs etcdadm, or orchestrates etcdadm, on the machines that are brought up.
D: That might be easier in terms of copying things around. But if you look at etcd-manager and how it integrates into kOps, it probably has the same sort of flows that you need to do here.
D: kOps doesn't currently use kubeadm, so there will be some differences, but in terms of the general flow of the CA certificates: we pre-create the various CA certificates externally, copy them in, and then each etcd or API server node is able to generate and sign using the CA's key pair.
B: Okay, so for this I was following what the kubeadm bootstrap provider does, and it also seems to be a controller, like CRDs plus controllers running on the management cluster. So I think that is what you were also saying, right? Running a similar controller that uses etcdadm instead. And for the part of signing, like providing an external root CA and having keys signed by that, you said I can refer to etcd-manager to get an understanding of how that's done?
D: Yes, I mean, kOps and etcd-manager sort of combine to do that. But I think the long and the short of it is: we create a CA key pair in a sort of central phase, in the kops create cluster stage, and we write it to a bucket, and then the etcd-manager processes download it and use the key pair. So that's how we avoid the need to copy the...
B: Yeah, yeah, okay. So I think that's something I was thinking: the CSR will be generated by Cluster API on the management cluster, and then the write_files part of the bootstrap data secret will contain the contents of it. Okay, I can definitely go through how kOps does it and understand it better. And so, should I go ahead, or should I wait to clean this up and then discuss it in tomorrow's meeting instead?
A: Yeah, if you Google "kubeadm v1beta2" or "v1beta1", you should get a ClusterConfiguration object that supports etcd as a key, and then under etcd there's external or local. So with local, this means the etcd is managed by kubeadm and the kubeadm bootstrap provider, exactly, so it's deployed as a static pod on the machine that has the control plane. In this case, etcd has to be external, and there's a key to configure this: you basically pass the external addresses where the API server should connect to.
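(For reference, a minimal kubeadm v1beta2 ClusterConfiguration using the external etcd key; the endpoints and file paths here are illustrative.)

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  # "external" instead of "local": kubeadm does not deploy etcd as a static
  # pod; the API server connects to the listed endpoints instead.
  external:
    endpoints:
      - https://203.0.113.10:2379
      - https://203.0.113.11:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```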
B: Yeah, I thought that had the options for local versus external etcd, but I can look into that, and then I can also add that to the proposal based on what I find.
A: Yeah, basically this is something to investigate. Once you extend the proposal, people from the Cluster API project will review it.
B: Okay, okay. So actually, yes: since I've never attended any of the meetings or presented anything before, I didn't know the agenda or purpose of this meeting. That's why I wrote this all up and am presenting it. But I think the right thing to do is to clean this up, maybe bring it up in tomorrow's meeting, and also open a GitHub issue. I still had a few questions, but if I shouldn't ask them right now...
F: I was just gonna move us along; solutioneering on the call probably isn't the best use of everybody's time, because you're going to have to go... we wouldn't be the stakeholders making the change, the Cluster API folks would. So I think, you know, presenting that you have a proposal and getting eyes on it is usually the modus operandi, and then it'll go through the review process.
F: So it'll take a while to get the ball rolling; I think that's the appropriate venue. And if there's, you know, general time at the end, if people want to hang out and sort of talk about the nitty-gritty horrors and details of etcd management, then that's totally fine too.
B: Sure. So, just one last question: this is definitely something that the Cluster API project would want to use, right? Like, a separate external etcd provider?
F: Not everybody wants to do that, for a number of reasons. So I think that would be a good conversation to have, and then they'll find a happy landing zone for this work.
A: Yeah, from the perspective of the whole SIG, I think this is something that we want to support somehow, given the architecture diagram that we have, with etcdadm being used in kubeadm and all the other projects. But again, since you're targeting Cluster API in your initial proposal, you just have to talk to the maintainers there.
A: No problem. Let me share my screen again to go through the rest of the agenda. I only have one group topic: the kubelet performance regression fix might be on hold for the initial release of 1.21. Derek made a comment here that he is concerned...
A: ...that this should take more time to soak in the CI before we merge it in for 1.21. So he's proposing that we merge this into 1.22 and then backport it. I personally have no objection to that. I mean, I could make the argument that the original breaking change that was merged didn't soak in CI at all, and now we're making the argument that the fix should soak in CI.
F: It is not normal to introduce a regression into a new minor series, right? What's the release time frame for 1.21? Like, what are we talking about for soak time? What's the lead time we have?
A: So, code freeze: we are already in code freeze. Test freeze, which was the period discussed for merging this change before, is this Thursday. So if you want to make an argument about the second point that Derek proposed, now is the time. If you have any comments, just drop them here. I completely agree with you; we are doing some interesting stuff with respect to this change. Releasing a minor release with this performance regression is very confusing.
F: Yeah, I would vote against it. What's the issue number? Against pull 9936, right? I'll comment.
A: Yeah, if anybody has comments, just drop them on the issue. I'm sure a lot of people do not agree with delaying this to 1.22. Immediately somebody commented: hey, how about 1.18?
A: So basically, this problem is now present in all versions of Kubernetes, of the kubelet specifically, and that's why we have this PR that fixes the problem. But the argument is that it's now touching critical paths, so we need more time to test. But yeah, if somebody wants this change, just comment; we can try to get it in for 1.21.
A: Cool, moving to subproject updates. I added one item for kubeadm. It has been a bit slow for kubeadm; we are in code freeze, and the release is looking good for kubeadm, nothing special. Something interesting: somebody created a KEP for the kubeadm control plane to run as non-root. To provide some context here: currently kubeadm runs the kube-apiserver, scheduler, and controller-manager as root in the containers managed by the kubelet.
A: Basically, the KEP is proposing that we stop doing that and start implementing a way for the user to configure what user to use in those containers.
A: We are missing some caveats here, risks and mitigations; there are a lot of interesting aspects to this. For instance, the KEP is proposing that we pin the UID and GID to 2000. What happens if, on the host machine, we already have a user with the same UID?
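(As a purely illustrative sketch of the direction, not the KEP's actual manifests: the control plane static pods would get a non-root security context along these lines.)

```yaml
# Illustrative static pod fragment: kube-apiserver running as the pinned
# non-root UID/GID 2000 that the KEP mentions. Any mounted certificates and
# keys would then need to be readable by this user, which is exactly where
# the UID-collision and ownership questions come in.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  securityContext:
    runAsUser: 2000
    runAsGroup: 2000
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.21.0
```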
A: This means that we grant them access to view the CA key, which is probably the most important artifact that we generate. So yeah, I have some interesting questions here. Basically, I requested that the contributor break down all the mounts that we have in the static pods and explain their idea for the permissions and the ownership of these artifacts.
F: I think it's good. We've talked about this a number of times; other people do this, I know Gardener does this, so I think it's good.
A: Yeah, definitely, everybody wants this. Just, I'm not sure; we have to do the security review thoroughly, because of these...
A: ...artifacts. Okay, moving to etcdadm. Joseph?
D: Yes, I mean, I think we touched on this before, but just to say that we are making progress on moving kOps's canonical etcd-manager location to the etcdadm repo and Kubernetes infrastructure, like was mentioned last time, I think by Mica. There are some PRs up, and hopefully, maybe with lots of luck, in two weeks we will be running from the community's infrastructure, building from the etcdadm repo. So that will be a good step towards the unification.
G: I mean, I think, for instance, you know, with AWS showing interest... I guess it's sort of a similar story with Cluster API, right? But I think it's important to get user feedback before stabilizing the project more. So as we hear from more users, that'll give us the confidence that, okay, we're on the right track.
G: I'm not in a hurry to change the status, but at the same time you're showing the roadmap, and, you know, I would say there's slow progress on that. But I guess that's maybe different from beta, like having a beta label.
A: To get the project going faster, we should try to get more contributors to actually contribute to the code base of the project. Like I said, I think Google Summer of Code, you know, the CNCF mentoring, presenting at KubeCon: these are ways to get more contributors. Maybe people can start joining the meeting, you know, assigning themselves to drafting a configuration file format, support for static pods, you know, some of those things.
G: That's a good question. Yeah, I don't think I had that in mind, or we had that in mind, but that's a good question. Probably a subset of these things could qualify.
G: You know, I guess moving to beta API types... actually, some work has happened to support running etcd in static pods in the service of etcd-manager, so that could probably be another one.
D: I'd agree with this. I don't know whether you're saying API-type beta or product beta, but API-type beta is when we have reasonable confidence in the struct, in the schema, not changing.
A: Yeah, it's also a matter of the CLI flags themselves being an API. When we start using them in other projects, it's also important to say, okay, these flags are beta, or at least whether they're alpha. I don't know what the CLI guarantee is, and whether etcdadm follows the official Kubernetes API graduation process, but if something is alpha, if the whole CLI is alpha, this means that flags can be changed without any deprecation.
F: There's an adoption guarantee too. For better or worse, in SIG Cluster Lifecycle we've been pretty conservative, for good reason, right? People are depending upon the API, so our alpha APIs really carry much stronger guarantees than the rest of the Kubernetes APIs, because if we break somebody, you hear about it everywhere, on Slack, on Twitter, you know, immediately.
A: And nowadays the mindset has changed so much that alpha in production is not something that people object to; as long as somebody makes a joke on Twitter, everybody's happy, let's see everybody use the alpha. I come from a slightly different perspective, but yeah. It feels arbitrary compared to something like kube-proxy, which is still alpha, and in fact I'm making a proposal to SIG Network to treat the kube-proxy config as GA.
A: It's alpha, but it's used everywhere, so if you want to deprecate it, you need at least one year for this alpha version, and it becomes a question: if this has been alpha for so long, is it not actually already beta? The same thing happened with the CRI specification.
A: It had been alpha for so long, and everybody was using the CRI spec already, but then what they did is they just jumped and promoted it to v1. So the CRI spec is now v1; nobody promoted it to beta first.
H: Yeah, just as a data point, and this isn't necessarily universal, but GA, v1, does sort of signal to consumers that this is ready for prime time, that this has gone through the wringer. Not that it will never change...
H: Obviously we have, you know, semantic versioning, and we can do v2s and stuff. But just as a data point: EKS, and again this is about Kubernetes core, not so much Cluster API or even etcdadm here, doesn't enable alpha features, as a pretty firm policy, just because it's very hard to turn something off. That's kind of why the AWS APIs don't change, while the Kubernetes API can change.
H: Not enabling alpha features for EKS is partly about stability and partly about continuity, but there are consumers out there who do expect things to be able to stand up to something. So, obviously not to rush anything, but GA does signal to users that this is really ready.
A: Yeah, I'm happy to hear that, you know, people still believe in not using APIs that are not officially graduated to v1. Also, pushing things to v1 is not necessarily a bad idea, and I think there's always v2 after v1, so things can change. If you have something that has been used for a long time by a lot of people, maybe it's time to graduate it to v1 and think about v2 in the future. Something like that.
A: Yeah, for the particular etcdadm integration we can discuss it tomorrow in the Cluster API meeting. I'm also curious what we are going to do with kubeadm, but that's like a separate process. Ideally we are going to need a KEP, an upstream Kubernetes KEP, for that. It's going to be interesting.
A: I think the main couple of points for etcdadm adoption in kubeadm were that we wanted the configuration file support, so that we can let users pipe a configuration file from some file on disk...
A: Sorry, sorry: pipe an etcdadm configuration from a file on disk all the way to the etcdadm binary, where the kubeadm binary was going to deploy the etcdadm binary. We also wanted the static pod use case, which I think is not that complicated to add; I think the config is more complex. But other than those two things, I don't really remember anything else for the kubeadm etcdadm adoption.
D: I think yes, and yes with a little bit of nuance. The idea is that etcd-manager will be a sort of self-driving layer that is actuating etcdadm commands, so you will always be able to see what it is doing for you, and we will try to express things in terms of things you could have typed yourself.
D: The plan is to do that live, as it were. So, well, shortly, kOps clusters will be running etcd-manager from this repo, and we will move that repo.
D: Because the current etcd-manager has its own, I guess you'd call it a fairly direct wrapper around etcd, it's called etcd-server or something, but it's currently just a class rather than a separate binary, and the idea is to basically replace that with etcdadm, or at least unify the two.
A: I see, that's a good one. Yeah, registry.
D: I don't think that will happen, I don't think so. I think we're going to try to keep the basic flow, unless someone wants to change things there. I think what it will add is the ability to... what we need in kOps is the ability to spin up an auto-scaling group and have it, like, self-init and join; we don't have an external manager to do that.
D: For us, Cluster API is interesting because it does have that external source of truth. etcd-manager is built for a world where there is no external source of truth, where you can power off everything, power it all back on, and it will all come back.
D: That is correct. In terms of the model it's absolutely correct, though it's slightly complicated in practice: there is no CRD for etcdadm, and there is no real CRD for etcd-manager, because there's no API server at that time.
D: We meet on Mondays, so in six days is our next meeting, I think.
A: Yeah, now it's a bit confusing with the time zones, yep. I see Craig Peters on the call. Craig and I yesterday discussed, in a private Slack conversation, the support for a Windows control plane. Correct? Do you want to talk about this here, or is it a topic for the SIG Windows meeting?
E: ...You know, they run on AKS, which manages most of the control plane, so that's not really a big concern to those users. There are obviously add-ons that don't work on Windows, so they need to have some Linux nodes as part of their clusters, and that sometimes presents challenges to users who are just not familiar with operating Linux machines.
E: So there's some interest in that. It's not something our users have raised to the highest priority, but I'm very interested in hearing whether other users are seeing a large demand for it. I'll also note that, you know, Microsoft has other products that use Kubernetes on Windows hosts, and to solve this problem we essentially run Linux VMs to run those components, and that's working just great.
A: Yeah, to provide some context from the VMware side: we have seen some customers who want this, but the overall demand has not been very high, and I was curious what the case is and whether we should maybe start investing time to solve this problem. And, quite frankly, the kubelet was the most complicated component.
A: We got the kubelet working, which means that maybe the rest of the control plane components will be fairly easy to set up, and I wonder if maybe someday someone can just do the experiment. If it works, we can just convince SIG Release that they should push the Windows container images, or have another external process doing that; it's just a matter of experimenting. I can try joining the SIG Windows meeting and discussing this topic more, to see what they think about it, I think.
E: That's worth discussing. You know, I think it's not really about the rest of the control plane; I think that actually wouldn't be a problem at all. It's really about the rest of the ecosystem components, Prometheus and so on; very few of those components run well on Windows. I mean, we've invested a lot in getting Envoy to run well on Windows, you know, for service mesh scenarios.
A: This is going to make a lot of those Windows administrators comfortable, especially if they don't have to context-switch between operating systems.
A: All right, if you're interested in this topic, I think we have a SIG Windows meeting in 40 minutes. That's correct. All right, any more topics for today? Final call.
B: If there is still time, I would like to just ask a few questions about the etcdadm and Cluster API integration that I was talking about earlier.

A: Sure, we have 10 minutes.

B: Okay. So for the proposal, you said that there should be use cases for why there would be an external etcd cluster, and I could just think of this one use case: the kubeadm config does support, under ClusterConfiguration, options to either use local etcd or external etcd.
B: Endpoints, like you were saying earlier. And I see that since Cluster API doesn't right now provision and manage this external etcd cluster, that can be one use case: managing this external etcd cluster that you can give to your ClusterConfiguration. But other than that, yeah, I'm not sure how many use cases the KEP, or the issue, should contain.
A: I think you should explain why this is needed in terms of topology, because of the networking between the nodes in the external case. Why would somebody prefer that over the stacked case, where kubeadm deploys etcd on the same node? It's obviously better high-availability support if you have an external etcd cluster; we consider this to be one level better HA than the stacked case. That's something you can talk about.
A: It's like: hey, it's better high availability if you have an external etcd cluster, here is how we can manage it, this is the external etcd.
A: It's pretty simple. And by the way, a fun fact is that currently the Rancher Kine project is kind of a shim; it's k-i-n-e.
I: It can be a good use case; I'd look into it. Because once you have externalized etcd, then RBAC can be managing different etcd clusters with different roles. So besides HA, which is what Christine mentioned, I think multi-tenancy is one more use case, which you can potentially add as well.
A: I don't think RBAC applies to what we're discussing, because this is not an authorization layer; this is authentication and a connection between hosts.
D: If they're talking directly to etcd, yes. I don't know if this is a good idea on your API server, but yes, like what was just said: if you have two different API servers and you want to share an etcd cluster, that would be supported, that would be a good idea. I'm not sure you should use it as a substitute for Kubernetes RBAC, if that's what you're going for.
A: It's like some fascinatingly granular control.
D: Oh, by the way, I also put it in chat, but in case you didn't see it: there's another project called Cluster API Provider Nested, which is bringing up etcd in the control plane, in the management cluster, I guess you'd call it; it runs as a StatefulSet. So that is probably the easiest way to understand how the various keys and stuff have to flow. Of course it's not using etcdadm or anything like that, but it's a good reference.
D: This is using etcd running as pods in a management cluster, as is my understanding. I don't know what bucket you put that in, in terms of topologies, but it's a different topology. Okay.
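(To make that topology concrete, a minimal illustration, not the provider's actual manifests: etcd for a nested control plane running as a StatefulSet in the management cluster.)

```yaml
# Illustrative only: a single-member etcd StatefulSet in the management
# cluster, backing a nested control plane's API server.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nested-etcd
spec:
  serviceName: nested-etcd
  replicas: 1
  selector:
    matchLabels:
      app: nested-etcd
  template:
    metadata:
      labels:
        app: nested-etcd
    spec:
      containers:
        - name: etcd
          image: k8s.gcr.io/etcd:3.4.13-0
          command: ["etcd", "--data-dir=/var/lib/etcd"]
          volumeMounts:
            - name: data
              mountPath: /var/lib/etcd
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```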
A: Yeah, it's running the whole control plane in pods on the management cluster, and there's a pending proposal for it, because we created the repository for the project first, but the proposal that actually explains why we need it was created later.
A: So I think the project is still about to explain why it exists, in a way, and I can see some very interesting test cases, at least using it for end-to-end testing. Okay, any more questions? We have three minutes.