A: Hi everyone, this is the SIG Cluster Lifecycle meeting, this Wednesday in August, and today we're going to talk about kubeadm upgrades, self-hosting, and overall 1.8 stabilization and implementation.
Let's start. We have some items on, well... The main tracker for kubeadm is in the kubernetes/kubeadm issues with the 1.8 milestone; let's see if I can find a link to that one. So we can see, I've tried to actually keep that up to date, and currently it says 23 open issues and 29 closed.
A: Yeah, so we have two PRs for self-hosting. One is from Diego from CoreOS, a really nice PR adding a rolling update strategy for DaemonSets, which means that on an update a new pod will be added before the old one is killed, which is required for self-hosting upgrades. And we have Tim's PR with checkpointing, pod checkpointing explicitly. Let me see if I can find that link as well.
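A minimal sketch of the rolling-update semantics being described, using today's apps/v1 DaemonSet types; the surge field postdates this meeting, so this is illustrative of the behavior rather than the PR's actual code:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	surge := intstr.FromInt(1)
	unavailable := intstr.FromInt(0)

	// Rolling update with surge: a replacement pod is created on the
	// node before the old control-plane pod is deleted, which is the
	// property self-hosted upgrades need.
	strategy := appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxSurge:       &surge,
			MaxUnavailable: &unavailable,
		},
	}
	fmt.Printf("%+v\n", strategy)
}
```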
B: I would feel comfortable with having it behind a feature flag, sorry, a feature gate, and promoting it widely in a blog post, for example, and in the release notes and in the documentation, saying that we want people to try it and that we plan to make it the default in 1.9, but not quite going as far as making it the default in 1.8, yeah.
Given that it's going to be kind of last-minute if it goes in, and also based on the previous history of kubeadm releases. I'm just saying what you told me earlier now.
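A rough sketch of the default-off feature-gate idea; kubeadm's real feature-gate handling lives in its own package, so the names and parsing here are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// defaultGates models a stabilization-friendly default: the feature
// ships in the release but stays off unless explicitly requested.
var defaultGates = map[string]bool{
	"SelfHosting": false,
}

// parseGates applies a flag value like "SelfHosting=true" on top of
// the defaults, mimicking a --feature-gates style option.
func parseGates(flag string) (map[string]bool, error) {
	gates := map[string]bool{}
	for k, v := range defaultGates {
		gates[k] = v
	}
	if flag == "" {
		return gates, nil
	}
	for _, kv := range strings.Split(flag, ",") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("malformed gate %q", kv)
		}
		on, err := strconv.ParseBool(parts[1])
		if err != nil {
			return nil, err
		}
		gates[parts[0]] = on
	}
	return gates, nil
}

func main() {
	gates, _ := parseGates("SelfHosting=true")
	fmt.Println(gates["SelfHosting"]) // true
}
```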
A: So basically, I mean, Kubernetes isn't maybe well known for, like, not breaking users sometimes, so when it's targeted, from a SIG PM perspective, to be a stabilization release, I personally think it may make sense to keep it in, like, everything working, we have all the required pieces, but not graduate it, not use it by default in 1.8.
A: Yeah, we didn't announce that in 1.7 that well, that, be prepared, when you upgrade to 1.8 this will happen. Also, one thing to keep in mind is that we can only enable self-hosting by default for new clusters; upgraded clusters would still use static pod hosting, as there's no way, well, there's a hacky way, yes, but no easy, no good way, to do this, to upgrade a static pod cluster from 1.7 to 1.8 as self-hosted, since we don't have this DaemonSet strategy, so yeah.
A: I think that's pretty much part of the contract with the kubelet, and, like, I don't see the reason to remove static pod manifest support, as we're going to use it in any case, right? For every new cluster we're first going to do static pods, then, like, upgrade to the self-hosting. So it makes...
A: Yeah, so I think, yeah, I don't know at which point you joined; should I just do a one-minute recap or something of what we said? All right. So basically, the first main discussion item, which we're going to bring up at the next meeting as well, is: should we support self-hosting, should we enable self-hosting by default in 1.8? And I'm a little bit hesitant about doing it, actually, after thinking about it for some time.
A: You can't upgrade to 1.8, but you can just force that through, and then it will, well, deploy exactly the same, the same manifest file for the API server, verify it's working, then proceed with the other ones. And, yeah, I mean, so that's, like, reason number one: we don't explicitly need it, I think. And also, since, like, the only way to upgrade from 1.7 to 1.8 is doing this static pod thing, I...
A: ...don't want to have a situation where we have half the clusters static-pod hosted and half the clusters self-hosted. So if we had enabled self-hosting by default, all new clusters would obviously be self-hosted and all our previously upgraded clusters would be static pod, and also the stabilization theme of 1.8...
A: Well, now we have self-hosting as an option, and you can test it out in 1.8, and it's working; everything, like checkpointing, with pod checkpointing, is there for bootstrapping. The right manifests are there; you can do a self-hosted cluster in 1.8 and it will work just fine, and it sits at the beta level. We just don't have it as a default, but say that it will, and that will change in 1.9, and we'll do self-hosted by default there. So, I don't know, is that reasonable to you?
B: I think that's probably the best way to get exposure, to get users running it, trying it, without forcing it on everyone, which is what we want, because we want users trying it and giving us feedback and finding new ways to break it, so that we can make it good enough, like, so that we can be confident enough in it to actually turn it on by default in 1.9.
B: Yeah, I also think that we need to call out the availability of it in the release notes and in the documentation, and maybe write a blog post and do a demo at the community meeting and stuff like that, to get eyeballs on it and testers in the 1.8 timeframe. Well, in the 1.9 timeframe, yep, yeah.
D: ...I wasn't around yesterday, but I did want to appear to say that, on checkpointing: my PR is up, I'm waiting on SIG Node to give me feedback. I verified that it behaves the way I wanted it to behave for the very simple use case that we care about, but I'm still waiting for feedback from them.
The kubelet code is a bit of a wonderland in how it operates, so there might be gotchas in how I did things, so I'm just waiting for feedback from them.
D: On the initialization routine, it reads the checkpoint directory, it loads all the pods that were written to disk, and, provided those pods don't have anything special, which none of the ones that we do have, it will not try, it should not try, to contact the API server. So, like, that's the one thing where I want to make sure I have validation on; there's, like, literally, I don't even know, screens and screens of logs that go through that.
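A minimal sketch of the checkpoint-restore flow being described, assuming pods are checkpointed as JSON manifests in a directory; the path and helper names are hypothetical, not the PR's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
)

// loadCheckpoints reads every JSON pod manifest written to the
// checkpoint directory and decodes it locally, without talking to
// the API server, which is the property being validated above.
func loadCheckpoints(dir string) ([]corev1.Pod, error) {
	paths, err := filepath.Glob(filepath.Join(dir, "*.json"))
	if err != nil {
		return nil, err
	}
	var pods []corev1.Pod
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var pod corev1.Pod
		if err := json.Unmarshal(data, &pod); err != nil {
			return nil, fmt.Errorf("%s: %w", p, err)
		}
		pods = append(pods, pod)
	}
	return pods, nil
}

func main() {
	// Hypothetical checkpoint location, for illustration only.
	pods, err := loadCheckpoints("/etc/kubernetes/checkpoints")
	if err != nil {
		panic(err)
	}
	for _, p := range pods {
		fmt.Println(p.Name)
	}
}
```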
D: ...ConfigMaps finish, right? And if that's enabled, then we'll write those to disk too, and it's not really hard to do; the scheme for loading and configuring is really stupid simple. This is not rocket science, so I don't know why people were hemming and hawing over all of the finagling issues. I understand the issues with secrets, but I even think you could feature-gate that, and if somebody decided they wanted to do that, I don't see why we would prevent them from doing it, because it would be a default-off option.
D: So I think, I think some people are being a little bit pedantic about ideas. I think, provided we have options for people to do crazy things, I don't think there should be anything preventing them from doing crazy things if they really want to do it. That's what every good cluster management system does, right? You should see the number of knobs in other systems; it's crazy. Like, there's literally a joke in Condor, a t-shirt that we made: "This is Condor, there's a knob for that."
E: Yeah, so I'll take another pass on it and take a look.
C: ...two open PRs, one of them is actually just approved. So let me link them; you'll see one of them adds an extra-groups field to the bootstrap token secret. So right now, with bootstrap token authentication, the token you go in with authenticates you as a user that's system:bootstrap:<token-id> and a group that's called system:bootstrappers, and that's just fixed. This change makes it so that, in addition to that group, you can also add yourself to extra groups, as long as all the extra groups are still prefixed with system:bootstrappers.
C: So this basically lets you have multiple bootstrapper groups, and as it is right now this is sort of not that useful, because there's nothing gated behind it; there are no roles or anything bound to those extra groups. But this will let you, for example, add a group for system:bootstrappers:masters, which is, you know, for bootstrapping new master nodes in the future in a kubeadm cluster. It also lets you tweak the CSR auto-approver, if you want to take this CSR auto-approver and do something different to validate nodes in different groups.
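A sketch of what such a bootstrap token secret might look like when built with client-go types; the token values are placeholders, and auth-extra-groups is the field being discussed:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "bootstrap-token-abcdef", // bootstrap-token-<token-id>
			Namespace: "kube-system",
		},
		Type: corev1.SecretTypeBootstrapToken, // "bootstrap.kubernetes.io/token"
		StringData: map[string]string{
			"token-id":                       "abcdef",           // placeholder
			"token-secret":                   "0123456789abcdef", // placeholder
			"usage-bootstrap-authentication": "true",
			// The new field: extra groups, each of which must keep
			// the "system:bootstrappers:" prefix.
			"auth-extra-groups": "system:bootstrappers:masters",
		},
	}
	fmt.Println(secret.Name)
}
```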
A: It is probably splitting out the helpers, the client role or whatever, but do we consider that GA functionality? I mean, at this point in the cycle it's all about risk management. And also, I heard somewhere that the kubernetes packages, right, I think there will be a repo named kubernetes packages in the 1.9 timeframe or something, and I think someone mentioned that it might be useful there, yeah.
A: Basically, it's like, I think kubectl plugins, or anyway, there was something, yeah, I think kubectl plugins and things in staging can't depend on the main repo, and also we have, like, e2e tests that will be split out soon but depend on this same thing, and we can't have the helpers in kubeadm, because it's kind of obvious that no one should depend on us, on this side of Kubernetes, so yeah. We were just talking about client-go, because we need a client from somewhere, right.
C: It does feel like, and I don't know if this has been discussed, but it would be cleaner in my mind if we did create an API type for bootstrap token and somehow found a way to annotate it so that it got treated as secret, the same way that secrets are treated as secret, and then a lot of the commands to work with the token would just fit naturally into kubectl.
A: I mean, there is this function where we could just treat it as a plain string, like just 23 chars, right, and that's one use case for the bootstrap token. And then we also have this admin-level secret type. So it might make sense to split those and have two different types of structs, for example: one for the simple case where we just hold the 23 bytes, and one for the more complicated case where we actually want to marshal and unmarshal from secrets. So, like...
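A sketch of the two-type split being proposed, with hypothetical names: a plain 23-character token string with validation, plus a richer struct that would round-trip through the secret:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// The 23-character wire form: a 6-char ID, a dot, a 16-char secret.
var tokenRE = regexp.MustCompile(`^([a-z0-9]{6})\.([a-z0-9]{16})$`)

// Token is the simple case: just the 23-byte string.
type Token string

// Validate checks the token against the expected format.
func (t Token) Validate() error {
	if !tokenRE.MatchString(string(t)) {
		return fmt.Errorf("token must match %s", tokenRE)
	}
	return nil
}

// BootstrapToken is the richer case that would marshal to and
// unmarshal from the kube-system secret.
type BootstrapToken struct {
	Token       Token
	Expiration  time.Time
	Usages      []string
	ExtraGroups []string
}

func main() {
	t := Token("abcdef.0123456789abcdef")
	fmt.Println(t.Validate()) // <nil>
}
```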
E: He worked around it, but I think at this stage it's just having people take a look at the implementation and seeing if that's reasonable. Someone also was asking about a design doc, so I don't know if we need to go back and do that; it's not a big deal, but yeah. So, I think, if everyone can take a look at the implementation and see if this is reasonable...
A: I'm going to pull it down locally and test it out. I mean, upgrade should be a no-op in this case, just, I mean, because we're relying on the controller to do the upgrading for us. So let's say we upgrade the API server first; then the 1.8 API server will be started, it will come into a running state, and...
A: So basically, for 1.7 to 1.8 we're going to do static pod upgrades all the way, and then from 1.8 on we'll be able to rely on this functionality. Yeah, that sounds good to me. One corner case that I don't know if we should be worried about is, like: let's say we upgraded the API server successfully, we upgraded the controller manager successfully, and we're trying with the new scheduler, which was added, and it didn't come up cleanly.
E: At least, I hope so; otherwise you'd have some problems. I mean, I don't think it would proceed any further, based on, like, your, you know, your maximum and minimum unavailable settings, but yeah, I mean, I could be totally wrong; it's been a while since I've done that stuff. But yeah, like, so the rollback of the rest of the update would be a process that you would kind of, like... however.
A: Yes, we should, for 1.9 we should definitely, I mean... Now we have the phases in place, which is great. The next step is probably wrapping the config in something actually useful, so we don't, as you said, have to, like... These variables that are runtime-specific, that should not be persisted in the API but still are really useful at runtime, we should probably wrap those in a runtime configuration object, which also has the serialized...
D: I don't know if you want to go this far, but we can debate on it; it's a good mental thought exercise. This is a gripe, so get ready for it: inside of the code, inside of Kubernetes, it's very much a document-view style architecture instead of a model-view-controller. So the API structs, and changes to those structs, are directly represented in the code, and any changes or shifts in the API get percolated all throughout the entire code base, right? You know, we're talking circa, what year is it?
D: I don't know, the 1990s? People came up and said this is a bad idea, because every time I change something it percolates all the way through. So they indirected through controllers; they had a model-view-controller architecture, where the model is typically the API or whatever you have to interpolate. But this way there's a middle layer, and that middle layer is your abstraction layer for the structs that you care about internally, and then there's always, like, some translation to do.
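A tiny illustration of the middle-layer idea with hypothetical types: the versioned struct is the external form, the internal struct is what the code base programs against, and the conversion function absorbs API changes:

```go
package main

import "fmt"

// ClusterConfigV1Alpha1 is the versioned, serialized representation
// (the "view"). Hypothetical, for illustration only.
type ClusterConfigV1Alpha1 struct {
	APIServerAddress string `json:"apiServerAddress"`
}

// ClusterConfig is the internal type (the "model") the rest of the
// code works against. Runtime-only fields that should never be
// serialized can live here without leaking into the API.
type ClusterConfig struct {
	APIServerAddress string
	ResolvedAddress  string // runtime-specific, never persisted
}

// Convert is the middle layer: API changes are absorbed here instead
// of percolating through the whole code base.
func Convert(in ClusterConfigV1Alpha1) ClusterConfig {
	return ClusterConfig{APIServerAddress: in.APIServerAddress}
}

func main() {
	internal := Convert(ClusterConfigV1Alpha1{APIServerAddress: "10.0.0.1:6443"})
	fmt.Println(internal.APIServerAddress)
}
```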
A: Yep, that's the one. Cool, I hope we can try to get that in today. There will be a lot of, I mean, still, the main functionality is kind of heavier than the CLI parts, so there will for sure be some discussions there later, but I've rebased the full PR and it's now chained cleanly with this one.
A: So once the CLI PR is merged, it will just... I mean, you can already review the implementation code cleanly in separate commits in the main PR. I should probably split the unit tests out there for better reviewability, because, I mean, there's a lot of unit testing; maybe not more than the code, actually, but it's close. So...
B: ...that to us, yeah. So I put a list of PRs to watch in the notes for the next meeting next Tuesday, because while Lukas is away I'm going to try and keep a handle on what everything is that needs to land. I'm sure everyone else is going to do that as well, but maybe we can aggregate, maybe we can maintain that list together. Or you might argue that the list should be the repo, like, in GitHub, the issues and the milestone, but I think, at least for me...
D: So I don't exactly know how hard we want to be with strict requirements on this feature freeze, because we have, like, things that are mid-flight. So typically you have, like, you know, it's got to be LGTM'd before the feature freeze day, but if we have, like, half the code in, you can't, like, not get the other half in, right? So this...
A: It passes pretty cleanly, like, now, when there's no CLI abstraction; I mean, there are maybe seven or eight files, and I've tried to make them, like, a hundred lines each, each file doing its own thing, to make it as reviewable as possible. I'm not sure if, like, actually splitting it technically into, you know, more PRs would make it easier for us. I'll...
A: Okay, so yeah, we've had the kubeadm 1.8 implementation working group meeting so far this hour, and to get the recording of that one, I'm going to stop this meeting temporarily, rejoin in some seconds, and it will start making the video of the previous one while I'm recording this meeting, if that's okay with everyone. Great.