From YouTube: 20180410 sig cluster lifecycle
C: We are deploying CNCF projects like Prometheus and CoreDNS onto Kubernetes across multiple cloud providers, and we've recently added ONAP, which is a Linux Foundation project, deploying that as well. Right now we're supporting AWS, Azure, Google Cloud (GCE and GKE), and IBM Cloud. We recently added OpenStack, and we support Packet for bare-metal Kubernetes. I'm going to switch over and show you the dashboard.
C: One of the main goals we had was to show results for these different tests in a way that sits somewhere between the detail you'd get from, say, the Testing SIG's TestGrid matrix for Kubernetes and the various projects that are on Jenkins, CircleCI, and elsewhere — a single view where we could look at the different versions, how the different projects work together, and how they behave across clouds.
C: This is what's showing on cncf.ci, the production dashboard we run for CNCF. It lists the projects we support — Kubernetes, of course, plus the other projects we're deploying. The builds are your standard stuff: everything goes through the build phase and so on, right now on GitLab, or we pull the actual artifacts from upstream, as we do for ONAP.
C: So, switching over quickly to this environment: I've already run the build phase, because it can take a while, and I'm just running the provisioning. This calls out to the API and starts that pipeline. Then, switching over here to the GitLab side, we're starting the stage for provisioning Kubernetes, and we can drill into one of these jobs and see the output.
C
This
is
actually
putting
all
the
pieces
together
that
we
need,
including
and
gathering
artifacts
for
the
specific
version
kubernetes
and
then
we'll
move
on
to
this
provisioning
stage.
I'm
gonna
pull
up
that
earlier.
One
ran
a
quick
and
we
can
see
it's
grabbing
the
kubernetes
release
and
it's
going
to
go
through
and
we
use
terraform
to
go
out
and
allocate
the
resources
with
cloud
provider
and
then
do
your
provisioning
of
kubernetes.
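The provisioning step described here — Terraform allocating cloud resources before Kubernetes is laid down — can be sketched as follows. This is a hypothetical Python sketch, not the actual cncf.ci pipeline code; the variable names (`cloud_provider`, `k8s_version`, `master_count`) are illustrative.

```python
def terraform_apply_args(provider, k8s_version, extra_vars=None):
    """Return argv for a non-interactive `terraform apply` run, passing
    per-provider and per-version settings as -var flags."""
    tf_vars = {"cloud_provider": provider, "k8s_version": k8s_version}
    tf_vars.update(extra_vars or {})
    args = ["terraform", "apply", "-auto-approve"]
    # Sort for a deterministic command line, handy for CI job logs.
    for key in sorted(tf_vars):
        args.append("-var=%s=%s" % (key, tf_vars[key]))
    return args

print(terraform_apply_args("aws", "1.9.6", {"master_count": 3}))
```

A pipeline job would then hand this argv to the shell after `terraform init`; the point is that everything cloud-specific arrives as data, not as per-cloud scripts.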
C: This is the normal flow for the builds: getting the e2e tests and the container images, then gathering all the pinnings for the specific version — what flags we use for each of those providers, whatever that may be — and we pass those to Terraform to use in the templates we have for the Kubernetes manifests that we lay down. Then there's the app deployment stage, where we deploy those apps like Prometheus and CoreDNS and run the end-to-end tests that they've provided.
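The flow described above — pinning a version and per-provider flags, then passing them into templates for the Kubernetes manifests that get laid down — might look roughly like this. The real pipeline uses Terraform templates; this Python stand-in, the manifest fields, and the image path are illustrative only.

```python
from string import Template

# Stand-in for a templated static pod manifest; $version and $cloud_flag
# are the pinned values passed in from the pipeline.
MANIFEST_TEMPLATE = Template("""\
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    image: k8s.example.org/kube-apiserver:v$version
    command: ["kube-apiserver", "$cloud_flag"]
""")

def render_manifest(version, cloud_flag):
    """Fill the template with the pinned version and provider flag."""
    return MANIFEST_TEMPLATE.substitute(version=version, cloud_flag=cloud_flag)

print(render_manifest("1.9.6", "--cloud-provider=aws"))
```

Supporting a new Kubernetes version or provider then means adding pins and flags, not new manifests — which is the maintenance win the speaker describes.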
C
So
what
I
was
shown
earlier
was
get
lab.
That's
the
underlying
system,
we're
looking
at
stuff
up
in
occur
and
the
producing
itself
is
mix
of
terraform
cloud
and
and
some
custom
Nettie's
visioning
to
make
our
the
configuration
that's
trying
to
use
templates,
and
so
we
use
terraform
to
take
advantage
of
the
cloud
provider
provisioning.
That's
community
and
cloud
provider
maintain
and
tie
into
that
and
then
templating
for
the
actual
different
versions
of
kubernetes
and
to
support
those
flags
and
pass
everything
in
and
make
the
maintenance
pretty
easy.
C
The
dashboard
itself
is
Erlang
elixir
for
the
API
and
then
Vijay
s
for
the
front-end
been
around
about
a
year.
This
is
a
timeline
a
little
over
a
year
actually
and
had
our
first
production
release
with
the
dashboard
and
everything
in
January
and
had
added
some
more
clouds,
including
IBM
cloud
and
OpenStack
in
March,
and
then
an
app
which
was
that
Linux
Foundation
project.
C: We're trying to work with a lot of the communities and get feedback. There's the new open CI community, which came about after a face-to-face workshop before the ONS conference a few weeks ago, and we're actually working on a messaging protocol — the RFC link is here in the slides. We're part of the CI working group, which has twice-monthly meetings right now, and we're planning a deep dive and an intro at KubeCon + CloudNativeCon in Copenhagen.
A: A choose-your-own-adventure story. I think what Chris is asking is: what is the default path you provide? And there's probably a follow-on question, which I might have, which is: is there a way to customize the many paths — to extend the testing coverage to more providers and different configurations?
C: As far as the different configurations go, we are definitely trying to make that possible with the templating and by going with Terraform. The way cross-cloud itself is put together, with the API we can pass in all those args and pass them along. The templating is driven by flags, and we are building HA clusters.
C
Also
looking
at
things
like
feature
flags
for
providers
where
we
could
go
we're
going
to
turn
on
this
type
of
storage,
which
wouldn't
be
available
on
all
the
cloud
providers
and
see
how
that
works.
So
if
it's
enabled
on
one
that
doesn't
have
it,
then
what
do
we
expect?
What
type
of
failures
and
what's
there
as
well
as
we?
We
had
something
in
logging
with
the
service
Orchestrator
from
ona
enabled,
and
I
think
it
was
in
195.
Kubernetes
195
started
having
some
issues
with
how
it
was
icing
logging,
so
caching
stuff
like
that.
C: Yes, absolutely. There's a mailing list — the CNCF CI mailing list — and there's also a Slack channel on the CNCF Slack. We're trying to set that up as a shared channel, so it can be on maybe a couple of Slacks, since it is a cross-project effort, and we have the email and several other things. So however people would like to interact — including, of course, on the GitHub repo itself.
C: Yes — so, hopping back over here to the dashboard: as I said, these other projects, like Prometheus, are deployed using Helm charts on top of these Kubernetes clusters. So Prometheus 2.0.1 is being deployed on Kubernetes 1.9.6 here, and then we run the end-to-end tests for Prometheus. That's what I was referring to earlier with ONAP — it looks like we're having some problems with the last pipeline, but anyway: with ONAP we saw an issue with Kubernetes 1.9.5, with their logging component and how it was utilizing it.
A: Going once, twice, three times — thanks for the demo; it's highly appreciated. I think over the long haul it's probably going to behoove us to understand where the cncf.ci begins and where SIG Testing's work ends. Ideally — putting my community hat on —
C: Absolutely. I also know that the tests, and maintaining a very large set of items to address things for so many folks, is a lot for us — so I think it's definitely going to run as kind of a complementary system, and we're trying to see where we can pool resources. We plan to provide access to the API, let's say, so there can be more collaboration there, and we've talked with folks on the Testing SIG before about potentially accessing the results that come out of what happens there. We definitely want the collaboration.
A: Great — hopefully folks can see my screen all right. Next up on the agenda: upgrades. We've had a number of conversations and a couple of different upgrade issues with kubeadm that resulted from the 1.10 release cycle, one of which is the config file issues that have occurred. Liz and Matt, I know you've been talking about it — do you want to give a brief update and the current status of what we're trying to do? Because this is kind of the most urgent.
E: I can speak to that. The basic issue is that we introduced an incompatibility in our config file format: some underlying struct that we reference changed without us noticing it, and that means the previous config file format does not deserialize properly in 1.10 and above. We've tried a couple of different ways to rectify this so far.
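The failure mode described here — a config format that mirrors an internal struct, so a silent field rename upstream breaks deserialization of old files — and the "transform on load" style of fix can be illustrated with a small sketch. This is not kubeadm's actual code, and the field names are hypothetical.

```python
import json

def load_config(text):
    """Parse a config file, converting the old field name to the new one
    so files written against the old struct still load."""
    data = json.loads(text)
    # Hypothetical rename: old format used "apiServerPort"; the new
    # struct expects "bindPort". Without this shim, old files break.
    if "apiServerPort" in data and "bindPort" not in data:
        data["bindPort"] = data.pop("apiServerPort")
    return data

old = '{"apiServerPort": 6443}'
new = '{"bindPort": 6443}'
assert load_config(old) == load_config(new) == {"bindPort": 6443}
```

This is the shape of the "deserialize/transform mechanism" option raised later in the discussion, as opposed to adopting the full API machinery conversion apparatus.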
F: Not really — the stuff that I talked to you about is fairly tangential to that. Let's say the proposal that I have out is to introduce additional ConfigMaps. Now, after going through the code and everything, and listening to what Liz and friends have been dealing with, I'm pretty hesitant to move forward with that.
F
Given
that
we
need
to
figure
out
the
single
config
map
and
it
seems
very
difficult
to
me
to
add
additional
config,
my
house,
without
sort
of
changing
all
of
the
code,
I'm
not
sure
if
it's
just
because
I'm
new
to
the
code
base,
but
it
seems
pretty
complex,
so
I
just
wrote
my
thoughts
on
that
comment
there.
If
you
want
to
look
it
over
my.
F: Yes, I'm specifically talking about the kubeadm config, which is the master configuration struct — it's this one-to-one mapping — and that, as far as I know, is the only config file that a user can provide. So it's sort of the one place for a user to provide their configuration, right?
A: Good — so how does this sound? We should probably get the stopgap in place immediately and cherry-picked onto the 1.10 branch, but we also need a long-term roadmap for how we're going to manage the config. I'd probably go toward planning to have a KEP in place for the config to move to beta — I don't know — or at least to have versioning semantics that we can maintain over time.
A: I don't necessarily know if we need to force it to go to beta, other than that it forces us to support backwards compatibility — which isn't necessarily a bad thing, given the state of its usage — and along the way we could also have long-term versioning semantics built into the config file. I debate whether or not we want to use the full API machinery apparatus or just have a deserialize/transform mechanism that allows us to reserialize on the other side, right?
A: Next up, with regard to upgrade issues: I know that Lee could not make it today, but we've had a number of conversations, and there was a second issue that was not related to the configuration changes, which was a race condition on initialization and ordering, because of how we've managed secrets — sorry, managed certs — slightly differently in the 1.10 release. Jason, do you want to give a synopsis and then talk about the issue?
D: So the main issue was — well, there are multiple interweaving issues. Basically, the only reason upgrades are working right now is that if you rerun a certain number of times, you'll hit a race condition that bypasses the static pod update checks, which allows your upgrade to proceed in the state it's in right now. And that's because of the way that we're generating the hash of the pod — of the static pods that are deployed.
D: So if that object mutates for any reason — like a status update — it'll sit there and mark that pod as upgraded prematurely, and when that happens it actually allows the upgrade to proceed, because we upgrade one static pod at a time right now. So when we upgrade etcd and enable TLS —
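The hashing problem just described can be illustrated with a small sketch (not the actual kubeadm code): hashing the whole pod object means a status-only mutation changes the hash, so the updater wrongly concludes the static pod was replaced; hashing only the spec stays stable across status churn.

```python
import hashlib, json

def pod_hash(pod, spec_only):
    """Hash either the whole pod object or just its spec."""
    subject = pod["spec"] if spec_only else pod
    blob = json.dumps(subject, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

pod = {"spec": {"image": "etcd:3.1.12"}, "status": {"phase": "Pending"}}
whole_before = pod_hash(pod, spec_only=False)
spec_before = pod_hash(pod, spec_only=True)

pod["status"]["phase"] = "Running"  # a status-only mutation

assert pod_hash(pod, spec_only=False) != whole_before  # looks "upgraded"
assert pod_hash(pod, spec_only=True) == spec_before    # spec hash is stable
```

The race the speaker describes is exactly the first assertion: a background status write flips the whole-object hash mid-upgrade and the update check is bypassed.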
D: So we need to follow up with another change: either break out updating the TLS on etcd as a separate phase — where we just update the TLS settings on the existing etcd and API server pods, and then proceed with the upgraded bits as we do now — or we have to do some other jiggery to address how we're handling upgrades and sort it out more properly.
A: For what it's worth, I like the notion of not conflating them — having two separate steps, with the certs added first and then the upgrade second — because that would allow you to still have the existing roll-forward and rollback mechanism that exists, without having to couple both things changing at once. And that is a problem if you try to couple both things changing at once.
D
We've
talked
about
yeah
I
agree,
because
if,
if
we're
going
to
kind
of
restart
those
pods
synchronously,
we
definitely
want
to
break
that
out
and
do
the
TLS
enablement
separately.
Otherwise
we
could
end
up
in
some
weird
rollback
stages:
states
where
you
know
we
actually
mutate
that
CD
and
have
to
restore
from
backup
where
we
wouldn't
have
to
do
that.
There's
there's.
A: There's also a concern, though: without a kubelet API check for liveness and readiness of these other components, you're basically going to fire off a cert change blind, right? You're going to copy the manifests, fire off the cert change, and wait some period of time for the API server to come back online. That's not terrible — we do that in other places — but it's not ideal either.
D
That's
basically
how
we
handle
the
upgrade
the
upgrade
stages
today,
as
it
is
taking
the
TLS
enablement
out
of
it.
The
1.9
upgrade
basically
did
the
same
thing.
We
dropped,
the
sed
manifests
update,
and
then
we
just
keep
hitting
the
API
server
until
it
comes
back
online
to
connect
to
NCD
when
it
comes
back
up,
and
we
do
the
same
thing
when
we
upgrade
the
API
server
static
pod
today,.
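The "keep hitting the API server until it comes back" behavior described here amounts to a simple poll-with-timeout loop, sketched below. The probe is injected so the loop is testable without a live cluster; in practice it would be an HTTPS GET against the API server's health endpoint.

```python
import time

def wait_for_apiserver(probe, timeout=60.0, interval=0.5):
    """Poll `probe()` (e.g. a GET on the health endpoint) until it
    returns True or `timeout` seconds elapse. Returns whether the
    server came back within the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Simulated API server that answers on the third probe:
attempts = {"n": 0}
def fake_probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_for_apiserver(fake_probe, timeout=5.0, interval=0.0)
assert attempts["n"] == 3
```

As the speakers note, this is workable but coarse: it detects "responding again," not "healthy with the new certs," which is why the kubelet API check comes up next.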
A: I don't disagree with that statement. I think the problem is we'd have to talk with SIG Node and have an ambassador on that side who can actually give us the state of SIG Node in a meaningful way, so that we understand when the kubelet API has gone to beta and is supportable — because within the last couple of cycles there's been a fair amount of churn, and I don't think we can rely on it yet. I'm totally okay with making a stopgap change in 1.11 and then having a large comment block, and an issue to cross-reference, that says: in the ideal world, maybe in 1.12, or if we get to beta in the 1.11 cycle, remove this section of code and use the kubelet API. That also seems like a reasonable approach.
A: Going once, twice, three times — all right, so: backlog triaging. What has Tim done, and why do I receive 10,000 emails? The big thing that I did to get ready for the 1.11 cycle: we had a lot of cruft and a lot of maintenance issues with the kubeadm repo, and I've done a bunch of logistical things to get some order. And I wanted to talk about how I plan to execute, and how other folks can execute, within this structure.
A
So
that
way,
we
can
actually
like
federate
as
much
work
as
possible
without
us
trying
to
like
lose
it
right,
because,
as
a
community,
it
becomes
a
little
bit
difficult
to
coordinate
all
the
things,
because
different
actors
will
have
different
agendas,
and
we
just
want
to
make
sure
that
people
aren't
conflicting
on
things
and
can
all
execute
the
federated
fashion.
So
there
are
a
large
number
of
the
issues
have
been
assigned
for
111
right.
A: Everything in the 1.11 cycle should be assigned unless it's a Help Wanted one, and the only reason this one is not assigned is that there's actually a patch, from someone else, that I just LGTM'd and approved a second ago. Even though it says there's an assignee here, that doesn't necessarily mean that folks who want to contribute, or who are interested in actively engaging on an issue, can't contribute a patch.
A
So
the
mode
of
operation
that
I
plan
on
using
is
similar
to
how
we
we
as
an
FTO
kind
of
manage
some
of
our
open-source
projects.
If
a
person's
been
assigned.
That
means
just
the
default
assignee
for
a
subject
matter
area
and
once
that's
active.
That
means
they're
actually
kind
of
working
on
it
right,
so
we'll
use
the
active
label
as
a
to
denote
that
this
person
actually
is
working
on
patches.
A
Just
comment
on
the
issue
saying
I'd
be
willing
to
contribute,
and
then
whoever
is
the
assignee
can
then
coordinate
with
that
person
to
then
make
them
the
assignee
or
just
if
they
don't
have
ackles
on
to
the
repository
we'll
just
you
know,
we'll
have
the
person
who's
the
default,
assignee
I
just
kind
of
Shepherd
it
through
the
meccans
that
bit
makes
sense
to
folks.
Are
they
questions
there.
A: I'll pause for a second... all right, so that's pretty uncontroversial. For everything in the backlog, I've put relative priorities as I saw them — I put a sorting hat on. If folks think a priority needs to be changed, by no means is my original sorting the end-all be-all; some folks might think an issue is higher priority because it affects them in their current deployments. So please, if you feel a priority should be changed for us to address,
A
Please
let
us
know
we
will
happy
to
change
the
priority.
The
the
goal
of
all
of
this
sorting
is
that
we
execute
in
priority
sorted
order.
Priority
sorted
order,
basically
needs
anything,
that's
critical
or
urgent.
It's
going
to
be
executed
first.
The
way
upstream
priorities
have
changed
over
time
that
consider
this
p0.
A: So anything that's critical-urgent is p0; p1 is anything marked important-soon; p2 is anything marked long-term important; and then there's backlog, which is like p3 or p4. Everything will be executed in priority-sorted order. I know some people had poked on issues periodically to ask what the status of things was — hopefully anyone should now be able to look at the kubeadm repository and get an understanding of where we're at.
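The triage scheme described above — critical/urgent as p0 down to backlog as p3/p4 — amounts to sorting issues by their priority labels. A small sketch, assuming label names along the lines of the upstream `priority/*` labels (illustrative, not the exact set on the repo):

```python
# Rank for each priority label; unknown labels fall below backlog.
PRIORITY = {
    "priority/critical-urgent": 0,
    "priority/important-soon": 1,
    "priority/important-longterm": 2,
    "priority/backlog": 3,
}

def triage(issues):
    """Sort issues by the highest-priority label each one carries."""
    def rank(issue):
        return min((PRIORITY.get(l, 4) for l in issue["labels"]), default=4)
    return sorted(issues, key=rank)

issues = [
    {"title": "docs typo", "labels": ["priority/backlog"]},
    {"title": "upgrade race", "labels": ["priority/critical-urgent"]},
    {"title": "HA roadmap", "labels": ["priority/important-longterm"]},
]
print([i["title"] for i in triage(issues)])
```

Executing "in priority-sorted order" is then just walking this list from the front.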
A
If
there's
a
bug
in
the
queue
for
110
or
111,
sorry,
you
should
be
able
to
sort
based
upon
the
priorities
and
understand
where
it
is
in
the
process
of
execution.
We
have
87
open
issues
for
this
release
cycle.
That's
a
lot
to
expect
to
address
all
of
them.
Maybe
if
we
get
enough
folks
to
be
able
to
execute
on
all
the
pieces
than
maybe
there
are
also
some
other
things
that
are
outside
the
scope
of
just
this
repository
that
are
also
exist.
A
Some
of
those
things
aren't
kept
proposals,
but
not
necessarily
in
issues
like
Matt's
work
right,
it's
in
a
kept
proposal,
and
that
requires
probably
yet
another
kept.
If
we
start
to
break
down
some
of
the
work
items
around
it
right,
because
one
thing
I
want
to
try
to
do
is
interject
a
little
bit
more
to
order
and
transparency.
A
So
that's
the
that's
kind
of
the
current
tentative
plan
for
111,
so
my
goal
for
my
ideal
goal
for
111
would
be
bug,
count
zero
test
back
in
place,
and
you
know
a
plan
forwards
on
all
the
major
work
items
and
address
the
road
to
GA
and
H
a
as
cleanly
as
we
can.
That's
a
that's
a
long
tail
of
action
items
so
so
speak
now.
If
you
have
any
concerns.
A: The reason you need this is to address two major failure conditions. The first major failure condition is: if I have a single-master-node environment and that node fails for any reason and you want to recover — if the API server is running as a pod on the system and not a static pod, then that information is lost on the restart.
A
Unless
there
is
a
checkpoint,
at
least
that's
the
bend
of
thinking
to
date
for
a
long
time
and
the
other
second
failure
condition
is
if
I
have
an
H
a
control
plane.
If
I
have
a
DC
outage
and
all
the
those
nodes
go
down,
then
that
also
has
the
same
type
of
failure.
Condition
that
check
that
self-hosting
and
checkpointing
were
meant
to
address.
A
It
starts
to
unravel
into
a
spiral,
so
one
of
the
one
of
the
potential
stopgap
measures
that
I've
been
thinking
about
and
which
I'll
talk
about
tomorrow
during
the
breakout
meeting
in
more
detail,
because
other
folks
are
not
necessarily
here-
is
the
notion
of
just
having
a
bootstrapper
pod.
That
is
a
static,
manifest
pod
and
the
booster
pod
will
basically
detect
if
an
API
server
is
running,
if
not
to
create
a
default.
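The bootstrapper-pod idea just described boils down to a reconcile loop: probe for a running API server and create a default one only when none answers. A hypothetical sketch, with the probe and the create step injected so the decision logic stands alone (the names are illustrative, not a real design):

```python
def bootstrap_step(apiserver_alive, create_default):
    """Run one reconcile pass: if no API server answers, create one.
    `apiserver_alive` is a zero-arg probe; `create_default` performs
    whatever 'lay down a default API server manifest' means here."""
    if apiserver_alive():
        return "ok"
    create_default()
    return "bootstrapped"

created = []
# Healthy control plane: nothing to do.
assert bootstrap_step(lambda: True, lambda: created.append("apiserver")) == "ok"
assert created == []
# Dead control plane: the bootstrapper recreates the API server.
assert bootstrap_step(lambda: False, lambda: created.append("apiserver")) == "bootstrapped"
assert created == ["apiserver"]
```

Because the bootstrapper itself is a static-manifest pod, the kubelet can restart it after a full node loss — which is exactly the gap in the two failure conditions above.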
G: I wanted to give a quick update on Minikube and also open up some general discussion about how Minikube engages with this SIG. First of all, I just wanted to say the Minikube release with Kubernetes 1.10 should be going out later today, which is the first one that uses kubeadm to set up the cluster by default.
G: The kubeadm work has been in there for the last several releases, available as an optional flag which some users have been testing out, but at this point we're switching over to it by default. So apologies if there's a huge number of kubeadm bugs discovered by this, but we couldn't think of another way to get more people to actually try it out without flipping the default. The other update is just that Minikube is a sub-project of SIG Cluster Lifecycle, and we kind of attend this meeting.
G: I'm having conversations with the SIG Testing people around this — trying to unify on a standard way of setting up a testing environment for an arbitrary set of Kubernetes components. And in fact, using kubeadm to set up Minikube clusters is a pretty big stepping stone toward that, because people can specify custom kubeadm builds, which can have custom builds of different Kubernetes components inside of them as well.
A: If folks want to understand the jiggery of how, and the order of operations — is there some doc you could point us to, if people wanted to engage on that front? Because, you know, it's the same people: Liz and I are here, and we're also on the testing front, and we also want to do this, and we have other reasons why we want to do this.
A: Also, the way people even develop on Kubernetes proper itself — local-up-cluster is kind of an abomination, but it is what developers have used for years now. I think that's a second area where folks could work on this type of thing: have that as sort of a behind-the-scenes piece, where it just executes a script but stands up the latest version of the control plane. So I think all signs from my side point to having a way to automatically use any build.
G: Yeah — since the start of Minikube this has always kind of been a second-order role, and the old method of setting up the cluster, using localkube, made it very difficult or impossible to stand up custom control plane components. So we're excited to get onto kubeadm and have a real standard solution here. I think it'll unlock a lot more of these use cases and make it easier for the Minikube team to support them. Yep.
A: There are no more agenda items. Are there any last things that folks wanted to discuss? Any updates, maybe from the cluster API folks, that cross over into the kubeadm space? Nope? Not yet — all right. Unless there are any more last-minute items, I think we can give everyone about 15 minutes back.