From YouTube: SIG Cluster Lifecycle 2021-03-09
A: All right, hello. This is the SIG Cluster Lifecycle meeting; today is the 9th of March 2021. I'm going to post the agenda document link in the chat. Please add your names to the list of participants and add your agenda topics.
A: Here is the link. All right, do we have any new meeting participants who wish to introduce themselves to the group?
A: All right, I guess we should move to the group topics. This one is about cgroup drivers; it's a saga already, and the cycle continues.
A: I had a bit of a chat with SIG Node about this. The situation currently is that the kubelet cannot detect the container runtime's cgroup driver, and the two have to match, because the kubelet is essentially managing slices and cgroups; if they don't match you get a bit of a mess. And it's manual: you have to set up the kubelet by hand.
A: You have to know what the driver of the host container runtime is, and you have to pass either a flag, which is deprecated, or a kubelet configuration value that matches the host's cgroup driver. I had a discussion with SIG Node about it. They know this is very annoying, but it has been like that for a very long time, probably since the inception of the kubelet. The latest discussion with them, in which some CRI implementers from CRI-O and containerd also participated, is to start using the CRI.
A: That is, using the interface protocol to communicate this implementation detail. I'm not sure when this is going to happen, but as you know, for deployers such as kubeadm this has become a problem; it's a problem for Cluster API, kOps probably knows about it, minikube knows about it, pretty much everybody does. It's a major pain for users to even have to discover this cgroup driver detail.
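For context, a minimal sketch of the manual matching being described, assuming containerd as the runtime and a kubeadm-style kubelet config file at /var/lib/kubelet/config.yaml; the paths, the stock containerd config contents, and the absence of an existing cgroupDriver entry are all assumptions here:

```bash
# Sketch only: the runtime and the kubelet must agree on the cgroup driver.

# 1) containerd side: switch the runc runtime to the systemd cgroup driver
#    (assumes the default generated config, which contains "SystemdCgroup = false").
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

# 2) kubelet side: set the same driver in the KubeletConfiguration file
#    (preferred over the deprecated --cgroup-driver flag); assumes the file
#    does not already contain a cgroupDriver entry.
echo 'cgroupDriver: systemd' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet
```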
A: So yeah, I've logged this issue. I'm not sure when somebody is going to start working on it, but at least we have an initial proposal in there. Let me show it quickly.
A: There are a couple of alternatives: either the kubelet looks at what the CRI implementation is doing, or the other way around, where the kubelet is set up in a certain way and the container runtime adapts to that particular driver. From what I understood, the second option is preferred. This is the link to the Slack discussion.
A: So if you are interested in this, you can follow the ticket. I think this is all we can do for now; there will not be any changes related to this auto-detection in 1.21.
A: I wanted to give a bit of an update about that. Basically, the PR is, from my perspective, in a good state. It has passed review from Jordan, from Derek, and from Elana Hashman from Red Hat.
A: I think Derek has to do another pass on it, because we had a back-and-forth discussion about what is best and what could go wrong. I did some testing here; the delay is caused by a particular check.
A: We added a particular check to synchronize the informer that the kubelet uses to enumerate the nodes from the API server, so that the kubelet stays in sync with the API server about what nodes are in the cluster, and that introduced a particular delay that is very problematic. It's not in a good spot, so this PR tries to fix it. I'm going to push for this; there's a milestone on the ticket and on the PR, and it's marked as critical-urgent.
A: Hopefully we can block the further releases. Sorry, the release process for the patch releases of prior minor releases is a bit arbitrary at the moment. I need to talk with the release team about potentially blocking those patch releases as well, because currently I don't think anybody looks at the critical tickets in kubernetes/kubernetes; they just push a new release with whatever has accumulated for that particular release. Ideally, if something is critical, we should block, backport it, and only then cut some of these patch releases.
A: That would help downstream consumers know: okay, this critical fix is known and hopefully the next patch release includes it. If we cut a patch release without that critical bug fixed, it's a bit confusing. So yeah, that is the update for that; I forgot to add it to the agenda. Actually, before we move on, does anybody have any other group topics? Because I want to go to the annual reports; maybe we can have a review here.
A: Cool. So every SIG has to do an annual report now, and working groups have to do them as well. I prepared this one in a Google Doc; I don't think anyone has commented on it, but this is the PR. I don't know what the deadline for merging it is, but I think it's something like the middle of March. So, Justin, I want to ask you: what do you think, is it a good idea to review this here, or should we maybe just point people to an asynchronous review?
C: I suggest that if there are any questions anyone wants to bring up, we can talk about them here. Fabrizio just commented that he added some comments; I don't know if any of those are things we should discuss synchronously. Otherwise I suggest just encouraging people to have a look. But yeah, thank you so much for doing this.
A: Yeah, no problem. Fabrizio, great, thanks for adding comments; I actually haven't checked them yet. Overall, how do I say it, I don't think there are many pending discussions in this particular annual report. Everything is pretty transparent; we don't have any private topics.
D: I've copied a comment from the Google Doc into the PR. They are only nits; what we are proposing looks more than fine to me. Thank you for driving this work.
A: Okay, Fabrizio, I'm going to address your comments, maybe tomorrow. If anybody else has any comments on this annual report, please add them. I know that Dims is going to do another pass on it, because Dims is the liaison between the steering committee and the SIG.
A: All right, moving to updates. I added a couple of items for kubeadm.
A: We are defaulting the kubelet configuration cgroup driver, which is related to the previous topic. Basically, we started defaulting to systemd for new clusters that are created, and I coordinated this with the mailing list, Cluster API, and Image Builder.
A: The story is that kubeadm uses systemd to run the kubelet, and if you're using systemd to run the kubelet, the recommended cgroup driver is systemd, which means that for kubeadm setups one should ideally be using the systemd driver. I know that Jason brought up the topic of what we do with other Linux distributions that are not using systemd. I know that we support, sorry, I forgot the name of that distribution.
A: OpenRC; OpenRC is their init system, but I forgot the name of the distribution, a popular minimal distribution anyway. Somebody commented that they actually don't mind this change as long as you can configure it, so I guess we can proceed with it. I didn't get any comments from the actual OpenRC maintainers. But by defaulting to systemd, the systemd driver will be required if you want to support cgroupfs,
A: sorry, cgroups version two. So I don't think anybody is going to be able to use cgroupfs at that point. I don't know what the situation is there; I didn't see any major complaints about this. And again, until we have this automatic detection, we're basically going to execute on this change during upgrade. It's not going to happen for 1.21, but the plan is, in 1.22, to start requiring users to set up their nodes to be compatible with this driver, because kubeadm will automatically start upgrading them to systemd unless they are explicit about the cgroup driver that they want, as described in the linked issue.
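As an illustration of what being explicit about the driver looks like with kubeadm, a sketch using the config API versions that were current around 1.21 (adjust for your release):

```bash
# Sketch: pin the kubelet cgroup driver at "kubeadm init" time instead of
# relying on the default, by passing a KubeletConfiguration document.
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
sudo kubeadm init --config kubeadm-config.yaml
```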
A: I have a PR for the Kubernetes website where I wrote a new guide on how to migrate to the new driver, and that's part of the effort we have for this.
A: This is in parallel to the upstream Kubernetes feature gate graduation, which is also happening in 1.21. We already have tests, and somebody submitted a PR to document how people can do this with kubeadm; at this point we're basically just flipping the feature gate.
A: Cool. Moving to kOps. Justin?
C: Thank you. Yes, I think the most interesting cross-project thing is that we are adding support for control plane nodes that don't run etcd.
C: We still have instance groups called master, and we have nodes, and we are going to add a third kind, which is going to be called api-server, that will not run etcd and could have a much simpler auto-scaling policy on it, for example. So you would still run three etcd control plane nodes, and then you would run N api-server nodes, where N is anything from zero to whatever your maximum is. It's a sort of scalability thing.
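Purely as a sketch of the idea, here is what such an apiserver-only instance group might look like in kOps terms; the role name, field values, and cluster/subnet names are hypothetical, since the design was still being discussed at this point:

```bash
# Hypothetical kOps InstanceGroup for apiserver-only nodes (all names are assumptions).
cat <<'EOF' > apiserver-ig.yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: example.k8s.local   # assumed cluster name
  name: apiserver
spec:
  role: APIServer       # proposed third role: runs kube-apiserver, no etcd
  minSize: 0            # can scale from zero...
  maxSize: 10           # ...up to whatever your maximum is
  machineType: t3.medium
  subnets:
  - us-east-1a
EOF
kops create -f apiserver-ig.yaml
```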
C: The interesting part, I think, is that we're trying to figure out how to label and taint those nodes. Should we label them as control plane?
C: Should we taint them as control plane? Should we try to be more granular? I think we're probably going to end up labeling them with control-plane plus a second label to say they are an api-server node, but that is still ongoing.
C: Yes, I think the two alternatives are control-plane plus api-server, or just api-server, or something like that. The downside of just api-server is that, although it's more accurate, it would mean you'd have to learn that: you'd have to know that a control plane node can be labeled either control-plane or api-server. And it might be that we do something different for taints than we do for labels.
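To make the trade-off concrete, a sketch of the "broad label plus a more specific one" option; the label and taint keys are illustrative, not a decided convention:

```bash
# Hypothetical labeling of an apiserver-only node (keys are assumptions).
kubectl label node apiserver-node-1 \
  node-role.kubernetes.io/control-plane= \
  node-role.kubernetes.io/api-server=
# A broad taint would keep ordinary workloads off it, mirroring regular
# control plane nodes; taints could also diverge from the labels, as noted above.
kubectl taint node apiserver-node-1 \
  node-role.kubernetes.io/control-plane:NoSchedule
```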
C: The kube-apiservers, sorry, the control plane nodes do run a kubelet, and yes, kube-apiserver runs as a static pod on the control plane nodes in kOps.
C: We aren't sure about that, actually, to be honest; it's a little fuzzy. That's one of the things we're debating: when we say this, do we really mean "no etcd", or do we mean "just the API server"?
A: Yeah. If all the control plane components are there, at least in kubeadm we call this an external etcd control plane node, something along those lines, but we don't label it; you can use the kubeadm config to discover this information. But I think you should make this decision before thinking about the names and the
C: labels. Does kubeadm label external etcd nodes with the control plane label?
A: So yeah, I mean, you could still label them with something like control-plane and then a separate label that says external etcd, maybe.
C: Yeah, I agree; to me that seems the most flexible, and I'm arguing on the review that we should only add the labels we're sure about, because we can always add labels later, but it's very difficult to take them away.
A: Yeah, and it's essentially a matter of topology, whether you want to have only the kube-apiserver there. Is it a requirement to have only the API server, or do you want the other components as well? The benefit of having them all on the same node is that they can communicate over localhost; they don't need to communicate across the internet, or across the same subnet, to reach the API server.
A: Yeah, it's a topology decision. I'm going to follow this; if anyone else is interested, you can do the same.
B: I have a question I want to bring up that's sort of tangentially related to this. We're really interested in this based on how EKS runs etcd separately from the kube-apiserver today in managed EKS. I'm on the EKS Distro and EKS Anywhere team, and we're really interested in Cluster API and in using it to create clusters. We've been using kOps as one of our test beds, and one of the things we've noticed, and it may be specific to kOps, but I'm curious.
B: I just want to ask because, Justin, you're here and etcd came up. kOps uses, what is it, the kopeio etcd-manager for managing etcd? And it's obvious why, for managing membership and other etcd operations.
B: But it's not clear to me where the governance is on that project. It's not part of the Kubernetes org; it's Apache 2 licensed, but still. I'm part of a big corporation, so I don't have carte blanche to commit to and use just anything. I want to use an etcd manager for Cluster API, but I'm curious about the status of this project, and I know there's also etcdadm.
Absolutely
I
have
a
it's
a
very
fair
question.
I
have,
I
think,
good
news
on
this
front,
so
copio
etsy
manager
is
going
to
merge
or
is
in
the
process
of
merging
into
the
kubernetes
cigs
subproject,
which
is
at
cd
adm
it.
The
code
actually
is
currently
in
there
in
the
etsy
adm
project.
We
have
not
yet
moved
our
build
to
it.
C: But I think our next step is to get the build built from the etcdadm repo, and then we can effectively shut the old one down, or say that contributions go into the kubernetes-sigs repo. Right now the path is kopeio first, and then it's mirrored, or manually copied but essentially mirrored, and as you say, we want to get it all going into etcdadm. Then we can also start to unify things: etcdadm is a command-line tool,
C: I guess more similar to kubeadm, while etcd-manager is more of what we call the self-driving version, and it would be great to get the self-driving version using etcdadm. But yes, the next step is to build from the etcdadm repo, and then, if we're good there, we can start to accept PRs on that repo.
B: Awesome. Is there a ticket I can reference and point to, to say this is happening, or something that outlines what you just said? That would be really helpful, just to have a public place to say: look, this is happening, this is not a scary thing.

C: That is an excellent idea; I'll open one in a couple of minutes.

B: Thanks. We also have, EKS internally has a very comparable version of this.
B: Like the kopeio etcd-manager, we call it the same thing internally: etcd-manager. There are portions of it that are specific to how EKS operates etcd, and EKS doesn't operate etcd using a pod; we just use systemd, and we have a manager sidecar process that we operate. That's something I would like to see open sourced, I think others would too, and that might be a good...
C: Yeah, that would be absolutely wonderful, specifically regarding what you just mentioned about how we run it. A well-recognized downside of the current mode of operation is that we have to embed all the versions of etcd we support into the etcd-manager pod, and it's not a great architecture; we have to be careful about which versions we support, because it adds weight. We discussed this in etcdadm.
C: One of the things we want to do is to be able to run etcd in a separate pod, and then the discussion came up: well, why are we running etcd-manager in a pod at all? Why don't we run it as a systemd process? The nice thing about that is it would be even closer to how etcdadm runs it. So I think collaboration there would be wonderful. I will open a second ticket to describe this, and you can weigh in there. How about that?
B: Awesome. Yeah, there are a number of things that I think we can help contribute, that I'd love to see.
B: We don't run the same version of it; we actually don't run the upstream recommended version of etcd. We basically run the latest everywhere, just because we found that the API differences are not significant enough, and the bug fixes that we want are typically critical enough that we just want them everywhere. So that's one difference we have, and more flexibility there, but yeah, that definitely makes sense.
C: Yeah, collaborating there will be wonderful. I think I'd also like to see collaboration with the kind project, who are doing that sort of thing as well, like supporting different etcd backends. I think we're finally in a reasonable place, particularly if we don't get distracted by replacing Bazel just yet.
A: Yeah, upstream k/k already removed Bazel support, but we still have it in some repositories.
C: Yes, and my understanding is that we don't have to remove it yet, although I've put it out there as sort of the writing on the wall. I think getting collaboration and momentum going on etcd-manager is more important than removing Bazel from etcdadm.
A: Yeah, I think it's fine; it's not like there's a mandate to remove it. I think test-infra will continue to use Bazel because there's so much Bazel in there that it is hard to remove. At some point we removed it from Cluster API, because I don't think anyone liked Bazel; to me personally it's fairly unreadable, to be honest. But it's a build system; you can replace it later. The source code is what's important.
A: All right, any more questions for kOps?
E: It's been a while since there was a minikube update here, so I thought I'd fill you in. We're kind of stuck upgrading the Kubernetes version because of the performance degradation, so we're just sort of stuck there. I was going to ask questions about your cgroup driver work, but Anders already asked all the questions on the issue itself, so that's taken care of at least; shout-out to Anders. So, what's new in the last two releases:
E: Since I was last here, we released minikube 1.17 and 1.18. We added, not proper support, but basic support for arm64 machines, at least for our non-VM drivers.
E: The ISO doesn't build for arm64 yet. We added an add-on that automatically pauses your Kubernetes cluster if you're not using it: if minikube, or Kubernetes, doesn't get any requests within a certain amount of time, which is configurable, then we'll just pause Kubernetes for you, and the second you ask something from the cluster, it'll spin back up and answer your request. And then Anders added a generic driver, so you can have a remote cluster somewhere that you manage via minikube on your local machine.
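For anyone who wants to try the auto-pause behaviour just described, a small sketch; the add-on was experimental at the time, so the exact flags and defaults may have changed:

```bash
# Sketch: enable the experimental auto-pause add-on on an existing minikube cluster.
minikube addons enable auto-pause
# The same machinery can also be driven manually:
minikube pause     # pause the control plane without deleting any state
minikube unpause   # bring it back
```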
E: So those are the big items. And yeah, the thing that caught the degradation was these new performance dashboards that we added internally: they basically spin up a VM, run minikube start on all of our drivers, and measure them, and the numbers just spiked. That's what happened.
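A rough sketch of that kind of measurement, assuming a benchmark machine where the docker and kvm2 drivers are available; the driver list and timing tool are assumptions, not minikube's actual dashboard tooling:

```bash
# Sketch: time a cold "minikube start" for a couple of drivers.
for drv in docker kvm2; do
  minikube delete --all
  /usr/bin/time -v minikube start --driver="$drv"
done
```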
A: Yeah, that's pretty cool. I'm going to add a kubeadm test for that at some point. The problem we have is that the Prow cluster we use can be a bit flaky, or at least it used to be at some point, getting a little resource constrained, which means you cannot reliably measure performance in a resource-constrained environment. So I think I'm going to add a test for that in Kubernetes itself.
A: So hopefully we can catch some of these problems, and thank you for reporting that; it caught some attention. I wanted to ask: does the arm64 support mean that you now support the much-desired M1 Macs?
E: I think kind is a bit behind on this aspect; we just have more head count, honestly, that's the reason. Poor Ben is just trying to do everything. We're still trying to figure out M1 support; the hardware is expensive right now, and the engineer who's been working on our arm machines just bought one out of pocket, so we're figuring it out. But yeah, for the M1 machines,
E: there are add-ons that don't work; you can't get the dashboard to work, for example, and we have a bunch of add-ons that use images that don't work on arm. But we're slowly trying to migrate everything to multi-arch manifests, so we have a checklist of things to do to make it GA.
A: [asks whether the cluster comes back automatically after the machine is restarted]

E: That was not really the goal of this particular add-on. If you restart your machine, you still need to tell minikube, hey, I had a cluster here. As long as your configuration is still there, the cluster will not have gone away, but we're not going to auto-start it or anything like that. The way the add-on actually works is that it's a systemd service, but it lives inside the VM.
E: So unless you tell the VM, hey, start back up, it's not going to do anything.

A: I see, so we have to have a script that starts the VM as well, and then everything will just kind of work.

E: I mean, this is still an experimental feature, so there are still pieces missing, namely that if you make a call to kubeadm directly, or even to Kubernetes, not through minikube, that won't work. But that's sort of the end goal there.
A: Thank you. Any questions for minikube?
A: All right, final call for a group topic. I don't see the Cluster API folks; Fabrizio, I saw that you had to drop earlier today. But we do have a Cluster API meeting tomorrow, so if you're interested you can join tomorrow's meeting.
B: All right, where is the link for that meeting? Is it in the notes and doc on the SIG Cluster Lifecycle GitHub page? Oh, it's up here. Okay, great.
A: Let me check if this one is up to date. I think it is.
A: The SIG Cluster Lifecycle one, okay, that's different from this one.

B: Okay, I'll find that one and join it.

A: So, do we have any final group topics for today? Anything final?