From YouTube: 20201007 Cluster API Office Hours
A: Hello everyone, and welcome to the Cluster API office hours meeting. Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette: if you'd like to speak up or have the floor, please use the raise-hand feature in Zoom; you can find it under the participant list. If you have any questions while we go through the agenda, feel free to either raise your hand or drop them in chat.
A: Welcome, everyone. Today let's get started with some PSAs; I had a few. The first one: there is a PR out now that updates the roadmap document for v1alpha4. I don't think I captured everything here, given that there are tons of issues open, so feel free to go through it and take a look at what we have, and then we'll probably review it together at next week's meeting before we merge it. Actually, before we do that, I'm going to put a hold on it so that we don't merge it too early.
A: v1alpha4 is probably going to come around the end of February or in March, and then we're probably going to plan v1alpha5 for Q3 2021. I just put that in there, but I actually don't know when that's going to happen.
A: One other thing to note is that we have some themes. For v1alpha4 we're talking about stability in general. That does not mean we only have to work on stability, but we want to push stability forward for our API types and controllers so that we can get to beta, possibly by the end of next year or a little bit after. Any questions?
C: Yeah, so I had opened a PR to move that one line from v1alpha3 to v1alpha4 in the roadmap. Would it make more sense to just close that PR and get it included in this one that you've got open, or does it not really matter?

C: Sorry, in the deleted section, yeah.
A: All right, so two more things. The other PR that I have open is for release guidelines. Folks have been asking for this for a long time. A summary of what this does: it documents what we have done in 2020, and it says that minor versions (for example, v1alpha3 and v1alpha4 in the 2020 case) are planned twice per year, with "planned" meaning they don't have to land, and patch versions are instead planned for every month.
A: This will give us a way to be more on a set schedule. We don't always have to release patch versions if there are no changes. One example: right now we're doing v0.3.11. We released v0.3.10 at the beginning of October, and we're going to wait until November for v0.3.11, letting the milestone fill up with PRs that will be merged before releasing.
A: All right, the other PSA that I had was that the main branch is actually open for breaking changes. We have already merged some breaking changes, and there are going to be a lot of changes coming up, because we're also upgrading controller-runtime to the 0.7 series, which has not been released yet (it's an alpha), and we're going to keep updating it as controller-runtime goes to 0.7.0. Some bug fixes and other important PRs might be backported.
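For context on those breaking changes: one notable API change in the controller-runtime 0.7 series is that Reconcile takes a context.Context as its first argument. A minimal sketch of the new shape (the reconciler type here is a placeholder, not actual Cluster API code):

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// MachineReconciler is a placeholder type for illustration.
type MachineReconciler struct{}

// In controller-runtime 0.6.x the signature was
//	Reconcile(req ctrl.Request) (ctrl.Result, error)
// and each reconciler created its own context. In the 0.7 series the
// manager passes the context in, so cancellation (and, later, tracing)
// can flow through the call.
func (r *MachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... fetch objects, compare desired vs. observed state ...
	return ctrl.Result{}, nil
}
```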
A: All right, if there are no more questions, I think we can move on to discussion topics. Going once, twice, three times. All right: Warren and Fabrizio with the management cluster operator CAEP.
D: Hello. I just wanted to provide an update on the management cluster operator CAEP. We spent some time working on it and filled it out with the initial details, but, more importantly, with the API. For those who are interested in this declarative style of creating the management cluster, please take a look at the API and go through it; we provided some example YAML scenarios at the bottom as well. Feedback, suggestions, comments: all welcome.
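As a rough illustration of what a declarative management-cluster API can look like, here is a sketch in Go; the type and field names are hypothetical, not the proposal's actual API, so see the CAEP document for the real one:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ProviderSpec names one provider to install (hypothetical shape).
type ProviderSpec struct {
	Name    string `json:"name"`    // e.g. "aws" or "kubeadm"
	Version string `json:"version"` // e.g. "v0.6.0"
}

// ManagementClusterSpec declares the desired set of providers, rather
// than installing them imperatively with clusterctl.
type ManagementClusterSpec struct {
	CoreProvider            ProviderSpec   `json:"coreProvider"`
	BootstrapProviders      []ProviderSpec `json:"bootstrapProviders,omitempty"`
	InfrastructureProviders []ProviderSpec `json:"infrastructureProviders,omitempty"`
}

// ManagementCluster is the hypothetical top-level resource a controller
// would reconcile into an initialized management cluster.
type ManagementCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ManagementClusterSpec `json:"spec,omitempty"`
}
```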
E: No, thank you, Warren. I think this is a good step forward after the initial discussion, so I would like to get feedback from as many people as possible, and if someone prefers to have a dedicated meeting for reviewing the API, we can arrange that. Otherwise, we can go on offline.
G: Hi everybody. I just put this item out here: I'm trying to gather a little bit more information about what the requirements would be for a new Cluster API provider, and specifically a new repo. We've been working on a project called virtual cluster that came from a couple of groups, specifically out of the multi-tenancy working group, and we're in the middle of re-implementing the API using the Cluster API style, specifically the control plane endpoints that were developed in the latest release. Right now we're struggling from a management perspective, because working groups technically aren't supposed to own code, so we're trying to figure out the right place for this in the long run, so we can build it out with proper development life cycles and actually use CI, for example, because we can't do that where this is currently housed. So this was just to gather a little bit of information; if somebody could help point me in the right direction, that would be great.
A: Perfect. First of all, welcome, all of you, to this group. I think Tim was planning to assign this to himself and Lubomir, so as long as there are two companies and maintainers, we should be good. This is a great start, and it's also something that folks have been asking for in the community: to have a virtual cluster. A nested control plane was talked about, like, two years ago at the get-together, so exciting stuff. Cool, thanks.
H: Yeah, hi. This is sort of a follow-up from last week. Trying to write the problem statement around bootstrapping, I kind of went down a rabbit hole, so what I've decided to do is start writing several documents that cover different areas of bootstrapping that we might want to revise.
H: This first one came out of quite long, in-depth discussions with Justin Santa Barbara around what kOps is doing to secure node identity, and what GCP is doing in the GCP cloud provider using the Trusted Platform Module stuff, to try to create a Cluster API-based mechanism which is one half core components and a smaller provider-based implementation.
H: This addresses a long-standing issue: the kubeadm bootstrap token basically allows you to register as any node, and that can potentially give you access to secrets and volumes that you're not supposed to get. The other thing that should help here is that this will have two components.
H: Basically, it will take over kubelet authentication from kubeadm (so we will require some kubeadm changes), plus a controller that could sit on the management plane, talk to the workload cluster, and approve the certificate signing requests that come from nodes as they come up and register. At that point we can go and create those node objects and apply labels and taints, which is another security-sensitive area, because taints and labels fall under node restriction.
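A minimal sketch of what such an approving controller could look like, assuming client-go and the certificates/v1 API; the identity check itself is the provider-specific half and is stubbed out here:

```go
package approver

import (
	"context"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// attested is a stand-in for the provider-specific identity check,
// e.g. TPM attestation or a cloud instance identity document.
func attested(csr *certificatesv1.CertificateSigningRequest) bool {
	// ... verify the requesting node is the machine it claims to be ...
	return false
}

// approveNodeCSRs lists pending CSRs in the workload cluster and
// approves only those whose machine identity has been attested.
func approveNodeCSRs(ctx context.Context, client kubernetes.Interface) error {
	csrs, err := client.CertificatesV1().CertificateSigningRequests().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range csrs.Items {
		csr := &csrs.Items[i]
		if len(csr.Status.Conditions) > 0 || !attested(csr) {
			continue // already handled, or leave unattested requests pending
		}
		csr.Status.Conditions = append(csr.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
			Type:    certificatesv1.CertificateApproved,
			Status:  "True",
			Reason:  "ClusterAPIAttested",
			Message: "machine identity verified by the infrastructure provider",
		})
		_, err := client.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, csr.Name, csr, metav1.UpdateOptions{})
		if err != nil {
			return err
		}
	}
	return nil
}
```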
H: Labels are used to restrict what workloads land where, and in fact, in recent versions of Kubernetes, you're not even allowed to register with that restricted prefix; the node admission controller doesn't let you. So this is a way of allowing Cluster API to take over some of that management from kubeadm, because Kubernetes is platform-agnostic and can't do it itself.
H: You can't fundamentally know that a host is who it says it is, but Cluster API providers can. So I've just made a start; there's quite a lot more to add to it, and we're going to be taking it to SIG Auth and the new SIG Security for review. This is a potentially big change to bootstrapping, and then I'm going to be writing other docs on solving the rest; this one is only for kubelet node registration.
H: I still need to work on how to fix the initial secrets during cluster bootstrapping; that's for Azure and for the different types of bootstrap systems which are not cloud-init, so that will be a separate document. It didn't make sense to put it all into one, because there are different concerns going on.
A: I was going to ask: how does this behave with MachinePool?
H: What this does is allow us to attest the identity of every node that's coming in, whether it's coming from a MachinePool or otherwise. We need to figure out how we prove that it's a member of the right cluster, but in theory this should better support machine pools as well as individual node registration.
H: It doesn't, actually. If you think about it (and I think that's certainly true for vSphere and AWS), we don't actually know what the node name is before Kubernetes does. It works because kubeadm generates a token and then the infrastructure provider creates a request; we don't know what the instance ID is, and for, say, the AWS cloud provider, we use the private DNS name as the node name. That's a necessity, and we're looking for the cloud provider to allow either the instance ID or the private DNS name we know.
I: Yes, so OpenShift already kind of does something like this today. We use the Kubernetes CSR bootstrapping, so it's just upstream: what's called a bootstrap kubeconfig, where you put in a bootstrap token, and that's just enough credentials to talk to the API server and generate a CSR (certificate signing request).
H: Cool, thanks, I'll take a look. That's pretty much exactly along the same lines. So yeah, today kubeadm uses the default signing mechanism, so it's just automatic approval of whatever the CSR is, which is not great. In this model we use the same CSR API with a bootstrap token, but with a custom signer, which will be Cluster API. I will need to check with OpenShift, but I think the difference here is that we rely on a cloud provider identity, and that allows us to do this with no prior information about the machine.
I: Yeah, I think there are a number of ways you could do it. I know there are things I would like to see changed in what we're doing, but what we do is just collect everything that's going to appear in the CSR request, because we don't allow anybody to fiddle with the kubelet naming or anything like that. So it's just whatever the kubelet sticks in there by default, and we make sure it matches what the cloud providers are going to give us.
B: Yeah, I also want to make sure that this isn't going to be tailored so that it's only for use in the big three cloud providers, because we're developing here both on bare metal and on VMware for our customers. We're familiar with the OpenShift process, and that seems to work for us on VMware, but I'd be curious to know: is there an alternative when there's no cloud provider in place?
H: Yes. I'm proposing that core Cluster API has a TPM-based implementation, which will work with GCP, vCenter, and bare metal systems where you have a TPM. I think the issue that you will have with bare metal is getting the endorsement key; that will be something you would have to handle for your particular environment, figuring out how to get the key that is endorsing the hardware, whereas for GCP and VMware there's sort of a common endpoint that we can get that from.
H: No. vCenter/vSphere has TPM support; we've just not done it yet. One of the requirements is that we need UEFI boot for vSphere, so we need to do some work with image-builder to switch all the distributions over to UEFI, and then we can do this implementation, which should serve a number of different environments.
H: Yeah, so I think we will still have sort of an always-allow, insecure approval that will behave much like kubeadm does today, and then, if you want to declare your provider as secure, you should be implementing a provider-specific signing/attestation mechanism.
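To make that concrete, the provider-specific piece could be as small as an interface along these lines; a hypothetical shape, not the proposal's actual API:

```go
package attestation

import certificatesv1 "k8s.io/api/certificates/v1"

// Attestor is a hypothetical provider-specific hook: given a pending
// kubelet CSR, decide whether the requesting machine's identity checks
// out (TPM endorsement key, cloud instance identity document, etc.).
type Attestor interface {
	Attest(csr *certificatesv1.CertificateSigningRequest) (bool, error)
}

// InsecureAlwaysAllow mirrors today's kubeadm behavior and approves
// everything; providers without an attestation mechanism would fall
// back to this and be marked as insecure.
type InsecureAlwaysAllow struct{}

func (InsecureAlwaysAllow) Attest(*certificatesv1.CertificateSigningRequest) (bool, error) {
	return true, nil
}
```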
H: Thank you. It sort of ties into conversations we've had around conformance, which we might pick up at some point, but yeah, we should start looking at these.
A: Thank you all; this looks great. I'm going to take some time to review it later, but it seems like a really great thing. Thank you for putting this together.
A: And before I forget: can we collect all the open proposals under here, if possible, and maybe remove the ones that we've already done, so that we can keep track of them in the next meetings? The other thing is: do we have an issue open for secure node registration?
H: Maybe not completely. It sort of emerged out of the discussion on applying labels; I need to break it out into a proper thing of its own.
A: Okay, perfect. Then we can get it added to the roadmap as well. All right: Mike, autoscaler scale from zero.
C: Yeah, this should be a fairly quick discussion, I guess. A question came up earlier today as I was trying to update some of what we had talked about for how to solve this. I think we had kind of agreed that our first version of scale-from-zero would just contain CPU, memory, and GPU requirements.
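For illustration, capacity hints like these could be carried as annotations on the scalable resource and turned into the resource list the autoscaler uses to predict a node for a group that is currently at zero. A sketch with hypothetical annotation keys (the real keys were still under design at this point):

```go
package capacity

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Hypothetical annotation keys carrying scale-from-zero capacity hints.
const (
	cpuKey    = "capacity.cluster-autoscaler.kubernetes.io/cpu"
	memoryKey = "capacity.cluster-autoscaler.kubernetes.io/memory"
	gpuKey    = "capacity.cluster-autoscaler.kubernetes.io/gpu-count"
)

// FromAnnotations builds the resource list the autoscaler would use to
// simulate a node for a MachineDeployment with zero replicas.
func FromAnnotations(annotations map[string]string) (corev1.ResourceList, error) {
	keys := map[string]corev1.ResourceName{
		cpuKey:    corev1.ResourceCPU,
		memoryKey: corev1.ResourceMemory,
		gpuKey:    "nvidia.com/gpu", // example GPU resource name
	}
	capacity := corev1.ResourceList{}
	for key, name := range keys {
		raw, ok := annotations[key]
		if !ok {
			continue
		}
		quantity, err := resource.ParseQuantity(raw)
		if err != nil {
			return nil, err
		}
		capacity[name] = quantity
	}
	return capacity, nil
}
```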
C
We
wouldn't
we
wouldn't
get
into
propagating
taints
from
you
know
from
the
cappy
stuff
back
out
to
the
auto
scaler,
but
stephen
harris,
and
I
don't
if,
if
he's
in
here,
please
speak
up,
I
don't
know
he
raised
a
question
or
he
raised
an
issue
that
said
aws
kind
of
required.
This
you
know
required
these
taints
in
order
to
do
its
auto
scaling
groups-
and
you
know
I
I
kind
of
asked
back.
Is
it
okay?
If
we
just
kind
of
provide
like
an
empty
list
of
you
know
like
say
like?
C: Does this mean that if you use an auto scaling group with AWS, you can never apply taints to those machines in that group, because they'll get bounced out of the node group or something? It seemed like Stephen would be okay with proceeding.
C: That is, with us just letting the user know ahead of time that taints are not enabled right now for this behavior. I wanted to bring it here to see if that raises any issues for people, or if anybody has thoughts about this. If there are no objections to proceeding with the first version without taints, I'm going to work up a POC to demonstrate this, based on some comments that Andy had given. So yeah, I'm just curious.
M: Hey, this is Hardik. I wouldn't call it a blocker, but, because I have implemented this in the recent past, I would say: the autoscaler basically tries to predict the node object, that is, which new node object will come up and what kind of taints it will have, and not only taints but also what kind of labels. Based on that, it decides whether a given pod should be able to get scheduled on that node or not.
M
So
that's
the
one
concern
that
we
have
to
keep
in
mind
that
if
user
has
certain
parts
which
should
ideally
not
be
scheduled
on
this
on
this
node,
then
autoscalable
may
take
a
wrong
decision.
It
may
think
that
okay,
this
the
future
node,
does
not
have
a
team,
so
my
workload
can
get
settled
on
it
and
can
physically
scale
up
the
wrong
machines
that
that's
a
downside,
but
so
essentially
this
has
to
be
dissolved
probably
later,
but
it
cannot
be
kept
this
way.
In
my
opinion,
it
will
harm.
C: Yeah, and I don't think our plan would be to keep it that way. I think there's a deeper discussion about how we handle those taints from the Cluster API side.
C: My impression was that we could get a first version of this done just to get the basic mechanics of scale-from-zero working, and then, as we decide how we want to handle the taints from the Cluster API project, we could provide a second follow-up. So: are we okay to create the first version, giving the users a clear indication that this feature is not available?
C: Okay, thanks. I appreciate the advice.
C: The one thing that concerns me is that we'll have to be very clear in the documentation about what doesn't work right out of the box, or what people shouldn't expect, and then we'll have to just follow up.
L: I could raise my hand, sorry. So, when we talk about the scheduler: when the scheduler selects a particular node, is there not an algorithm that sorts out which of the available nodes can be utilized, based on the taints and all those factors? That is already incorporated, I believe, so that should tell us what we do for selection for autoscaling: if a node meeting your requirement is available, you'll get it; if not, it won't be selected.
C: Thanks. On the first part of what you're saying, I think that's what Hardik was saying: the scheduler in the autoscaler will attempt to match pods to node groups that it can scale. So if it sees that a pod has a certain requirement, a required label or a taint toleration, then it's going to look for those in the node group that gets scaled.
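For reference, what the autoscaler has to simulate here is ordinary taint/toleration matching; a minimal hand-rolled version of that check looks roughly like this:

```go
package scheduling

import corev1 "k8s.io/api/core/v1"

// tolerates reports whether a toleration matches a taint, roughly
// following core Kubernetes semantics.
func tolerates(tol corev1.Toleration, taint corev1.Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != "" && tol.Key != taint.Key {
		return false
	}
	if tol.Operator == corev1.TolerationOpExists {
		return true
	}
	return tol.Value == taint.Value // operator defaults to Equal
}

// podFitsTaints reports whether every NoSchedule taint on a predicted
// node is tolerated by the pod. If taints are never surfaced to the
// autoscaler, this check trivially passes and the wrong group can be
// scaled up.
func podFitsTaints(pod *corev1.Pod, taints []corev1.Taint) bool {
	for _, taint := range taints {
		if taint.Effect != corev1.TaintEffectNoSchedule {
			continue
		}
		tolerated := false
		for _, tol := range pod.Spec.Tolerations {
			if tolerates(tol, taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}
```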
C: So if we don't expose those taints in the autoscaler's information, it could make poor decisions about where to scale things. Unfortunately, the difficulty here is that the plumbing on the Cluster API side to expose those taints to the autoscaler is a little complex, and there are still some questions we need to answer about how we would do that.
C
So
it's
it's
an
imperfect
situation,
but
I
think
it'd
be
nice
to
start
getting
this
feature
kind
of
written,
even
if
we
have
to
give
users
an
indication
of
of
how
they
can
use
it
currently,
and
my
hope
is
that
it's
not
a
hard
blocker
based
on
you
know
the
requirements
of
the
clouds,
the
cloud
providers
and-
and
so,
if
it,
if
nobody
has
like
kind
of
a
hard
objection
to
going
forward,
then
I'd
kind
of
like
to
proceed
with
the
knowledge
that
will
inform
users
and
kind
of
work
on
the
second
part
of
this.
K: This is kind of already an issue that exists with the autoscaler integration as it sits today, so I think adding the scale-from-zero support without the native taint support would just be adding an additional feature to the existing autoscaler integration. Then we can treat the taint issue, and how we expose that, as a separate issue, so that it doesn't block the initial implementation work.
C: I can dig that, Jason. I think my bias probably shines through a little bit around this taint issue, because we've handled it a little differently in OpenShift. I just kind of assumed it exists that way for all the other providers, which is a bad assumption on my part, but I tend to agree with your thinking, Jason.
A: All right: Brian, distributed tracing.
J: Yeah, I just want to mention that I've written a document. I did a demo, which many of you will have seen a few weeks ago at this meeting; the link to the video is in the document.
J: If you were not at that meeting: if you like distributed tracing, I've done essentially a proof of concept of it in Cluster API. This document should really be a CAEP, but I didn't quite get around to formatting it in all the right ways, and it's also getting kind of confusing because bits of it need to go into multiple different repos. So I'm putting it out there: please comment, please tell me what I got wrong, and we'll try to move this forward.
A: All right. Other than what we chatted about today regarding the context in controller-runtime, was there anything you would like to see in Cluster API as a breaking change, or does that fulfill the purpose of spans?
J: Entering the reconcile method and then figuring out whether you're actually going to do anything before firing up a trace span would be really excellent, and that would probably look like a breaking change. I don't actually know how to do that right now, but, as a placeholder answer to your question: yes, I'm sure there could be breaking changes.
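A rough sketch of the kind of instrumentation being discussed, wrapping a reconcile in an OpenTelemetry span; the names here are illustrative, not the actual proof-of-concept code:

```go
package controllers

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	ctrl "sigs.k8s.io/controller-runtime"
)

// TracedReconciler is a placeholder type for illustration.
type TracedReconciler struct{}

func (r *TracedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Start a span for this reconcile; any downstream call that takes
	// ctx is parented under it automatically.
	ctx, span := otel.Tracer("cluster-api").Start(ctx, "machine.Reconcile")
	defer span.End()

	span.SetAttributes(
		attribute.String("namespace", req.Namespace),
		attribute.String("name", req.Name),
	)

	// ... actual reconcile logic, passing ctx down ...
	return ctrl.Result{}, nil
}
```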
A: Sounds good; I'll make sure to add the issue to the board.
A: Thanks for working on this. Feel free to look at the document, folks; there's a demo link in there, and I highly suggest looking it up. It was really good.
J: You can, I hope, run it yourself. I've got branches against the 0.6.0 controller-runtime and the 0.5.11 controller-runtime that current master Cluster API is built against. I don't have published images (I could do that if somebody's really keen), but if you're building Cluster API you can take this code, it should build, and you can play with it. Please do.
K: Yeah, I just wanted to give an FYI to folks, because I feel like there may be a little bit of overlap with this group. I've started on a POC of a minimal, embeddable, Kubernetes-less API server that has CRD support; so, not a full Kubernetes API server.
K: Just the minimal amount needed to be able to deploy CRDs and do things with them, the idea being that you can embed it into a separate binary and potentially use it for bootstrapping workflows, to avoid needing a Kubernetes cluster to get a Kubernetes cluster. I will be presenting this at the API Machinery call that starts at the top of the hour; if anybody's interested, feel free to reach out. I mostly wanted to bring awareness to folks, because I know some folks have been following this saga on Twitter.
K: Yeah, it actually comes out of the work that I'm doing on another project, Tinkerbell, for managing PXE booting and workflows around infrastructure. We're getting ready to implement event-driven workflows there, and a lot of the semantics that we're having to build are things that you get for free from Kubernetes, and that kind of led me down this rabbit hole.
K: I don't know; I think envtest has a kind of separate usage right now. It could potentially be aligned. This project has turned into kind of a ball of things that encompass a few different areas.
K: One is identifying places where it's hard to reuse existing components within Kubernetes itself, because they live in k/k instead of other places, and the way some of the API server components (whether it's the aggregated API server, the extensions API server, things like that) are really tied to Cobra commands, rather than more of an API-type model where it's easier to embed them into your own applications without just exposing whatever command-line arguments Kubernetes has.
K: So there may be a little bit of overlap with envtest, but I don't really expect so, because envtest is trying to model more of a full Kubernetes API server, and this is more scaled down.
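For contrast, typical envtest usage in controller-runtime looks roughly like this; it launches real kube-apiserver and etcd test binaries, which is exactly the weight a minimal embeddable server would avoid:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func main() {
	// envtest starts real kube-apiserver and etcd binaries and installs
	// the CRDs found on disk before handing back a rest.Config.
	env := &envtest.Environment{
		CRDDirectoryPaths: []string{"config/crd/bases"},
	}
	cfg, err := env.Start()
	if err != nil {
		panic(err)
	}
	defer env.Stop()

	fmt.Println("test API server running at", cfg.Host)
	// ... build a client from cfg and exercise controllers against it ...
}
```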
A: Thanks, Jason. So this will be at 11 Pacific time, at the top of the hour, right? Awesome. Exactly, perfect.