Description
Recording
[KB] Rolling update demo
https://github.com/cloudfoundry/korifi/pull/2525
[GC] New validator library
Going to migrate other endpoints to use the new validating library that the manifest endpoints use
A: Cool, so you've been working on the feature to do rolling Kubernetes updates for CF push, restage, and restart if you select the rolling strategy. Can you explain a bit how we got this to work?

So we've just done a PR today, taking in the feedback we got from Georgi and Danny's draft PR.

A: The other day there was a bit of discussion about how to update Korifi when you've got an existing app: that app shouldn't restart when you deploy Korifi, and it needs to keep the same name and everything. There was a bit of complication there, but we got that all ironed out in the end.
A
A
These
three
pods
here,
the
runtime
PODS
of
my
app
so
you've
got
three
three
instances
so
to
jump
straight
in.
If
you
do
something
like
CF
restart
strategy,
rolling.
A
This
is
how
we're
controlling
it
now,
so
we've
always
had
this
annotation
called
approv
that
gets
bumped
every
time.
The
app
goes
from
the
start,
State
into
the
Stop
state
started
into
stopped,
and
previously
this
was
also
used
to
compute
the
name
of
the
app
workload
and
the
stateful
set.
A: But we want that name to stay stable unless we've done a cf restart without the rolling strategy. So we've introduced a new annotation called last-stop-app-rev, and we now use that for all the naming. And as a sort of migration, when this version of Korifi comes in, it will be set to the same value as app-rev.
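A rough sketch of the logic just described, in Python rather than Korifi's actual Go controllers; the annotation keys and the naming scheme are illustrative assumptions, not the exact Korifi implementation:

```python
# Illustrative sketch (not Korifi's actual code) of the behaviour described
# above: workload names derive from last-stop-app-rev, and the migration
# defaults it to app-rev when it is missing on an existing app.

APP_REV = "korifi.cloudfoundry.org/app-rev"                      # assumed key
LAST_STOP_APP_REV = "korifi.cloudfoundry.org/last-stop-app-rev"  # assumed key

def migrate_annotations(annotations):
    """On upgrade, default last-stop-app-rev to the current app-rev."""
    if LAST_STOP_APP_REV not in annotations:
        annotations[LAST_STOP_APP_REV] = annotations[APP_REV]
    return annotations

def workload_name(app_guid, annotations):
    """The name now depends on last-stop-app-rev, so it stays stable
    across rolling restarts (which bump only app-rev)."""
    return f"{app_guid}-{annotations[LAST_STOP_APP_REV]}"

anns = migrate_annotations({APP_REV: "14"})
assert anns[LAST_STOP_APP_REV] == "14"
assert workload_name("my-app", anns) == "my-app-14"
```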
A: If I don't do --strategy rolling, this just sets the app status to stopped, then starts it again.

A: So you can see the stateful set has just disappeared and come back with a new name; the app workload is only nine seconds old as well. And if you look at the app, I think the last-stop-app-rev was, I don't know, 13 or 14, probably 14. So app-rev has been bumped up to 15, but now you see the last-stop-app-rev is also 15. That's why the names of the app workload and the stateful set have changed.

A: Eventually the stateful set template's app-rev, sorry, version, whatever, gets bumped as well, and as soon as you do that, it has the effect of triggering a rolling update.
A: So we're relying on that fact, and it wasn't working until this PR, because there was a match label in the stateful set pod selector which specified the version. It would say version 14, and when it changed to 15 the update would be rejected, because the selector is immutable, unfortunately. So we couldn't update stateful sets in the past. Now we can, because we removed version from the selector.
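The selector constraint being described can be sketched as a simple check; the label names are illustrative, but the rule is Kubernetes' own: a StatefulSet's `spec.selector` cannot be changed after creation, so any change to it forces a recreate:

```python
# Illustrative sketch: Kubernetes rejects any change to a StatefulSet's
# spec.selector, so if "version" is in the match labels, bumping it is
# an invalid update and the StatefulSet must be recreated instead.

def can_update_in_place(existing_selector, desired_selector):
    """True only if the selector is unchanged (it is immutable)."""
    return existing_selector == desired_selector

old = {"app-guid": "my-app", "version": "14"}
new = {"app-guid": "my-app", "version": "15"}
assert not can_update_in_place(old, new)   # rejected: selector changed

# After this PR the selector no longer contains "version":
fixed = {"app-guid": "my-app"}
assert can_update_in_place(fixed, fixed)   # fine: selector is stable
```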
A
What
else
yeah
so,
if
you've
got
an
existing
app,
that's
going
to
have
version
in
the
selector
and
so
we're
stuck?
If
somebody
does
a
rolling,
restart
or
rolling
push
or
something
it's
going
to
fail.
So
nothing's
going
to
work
until
you
get
a
new
stateful
set
created,
so
we're
going
to
require
that
somebody
does
a
CF,
restart
or
a
CF
push
or
whatever
say
free
stage
and
that'll
be
sufficient
to
create
a
new
stateful
set
with
the
right
selector
on
it,
and
then
we
can
do
version
updates
yeah.
A
So
if
I
just
do
that
strategy
rolling
thing
again,
you
just
have
to
guess
what
the
cfap
looks
like
now.
So
we
should
have
appref
bumps
from
15
to
16.,
but
because
we
did
a
rolling
thing,
the
last
stop
approve
should
be
15..
B: That change we made to the match labels is the critical change, because basically it means you can't just do rolling restarts on an app that was running before this change came in. That app will have a stateful set which uses the version in its label matcher, which means as soon as you try to restart with rolling, it's going to try to update that label, and labels that are part of a matcher are immutable.

B: It concerns me slightly, but maybe it's not that big of a deal if we don't have that many people around with running apps, if you see what I mean. We could point it out in the release notes. We couldn't find any way to do this smoothly; unfortunately, I don't think there is one, because in order to change that selector you have to recreate the stateful set, right?

B: So you have to do something that triggers a hard restart, which would be either a push, or a cf restart without the rolling strategy, or maybe a restage. Anything that leads to the recreation of the stateful set works, because then the new stateful set doesn't have that label in the selector, and then you can update it and everything works fine.
A: As for how it behaves, we only tried it with kubectl, and we didn't try it on the deployment endpoint; we should.
B: And then the CLI uses these endpoints, right? We've implemented the create and the get endpoints for deployments, and for get we just look at the app: if the app is in a certain state, then the deployment is in progress; if the app is in another state, then the deployment is complete.
B: I don't think we ever say the deployment has failed, so it might just tank. There are two options, basically, because there are two possible outputs from that endpoint: either it returns immediately as if the restart had happened when it actually hasn't, or it just hangs there forever.

E: Is there a way, on the create deployment endpoint, to detect that your app workload is an older one that can't support it, and just fail early there? Or is that messy?
B: Yeah, I mean, I guess we could find a way to have something that is immediately evident on the app, maybe, so that it fails when you try it on an app that is flagged as, you know, old or the like. Even if we have to add some code to our reconciler to mark old apps, and then remove that mark at the first restart or something, maybe that's not too bad, because then we'd just be telling people: you can now use the strategy.
C: In 99% of these cases, would that last-stop-app-rev annotation be there by default, or does it get added the first time you try to do a push or a restart?
D: Yeah, I mean, I was thinking that in the controller, when you're reconciling a workload, if you notice that it still has that selector on it, you could either fail in a meaningful way, or you could just delete the thing and re-reconcile, forcing a hard restart even though they asked for a rolling restart.
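The two reconciler options being proposed can be sketched as follows; this is illustrative pseudocode for the decision, with assumed label names, not Korifi's actual reconciler:

```python
# Illustrative sketch of the two options for the workload reconciler when
# it finds a pre-upgrade StatefulSet whose selector still pins the version
# label: fail loudly, or delete and recreate (a forced hard restart even
# though a rolling one was requested).

def reconcile(selector, fail_fast):
    if "version" in selector:
        if fail_fast:
            raise RuntimeError("workload predates rolling updates; "
                               "a hard restart (cf restart/push) is required")
        return "recreate"   # delete old StatefulSet, reconcile a fresh one
    return "update"         # safe to update in place (rolling)

assert reconcile({"app-guid": "x"}, fail_fast=True) == "update"
assert reconcile({"app-guid": "x", "version": "14"}, fail_fast=False) == "recreate"
```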
B: Yeah, the tricky bit is, I think, that part of the damage comes from having bumped the number, kind of assuming the number bump will cause all the things you want. Then you've bumped the number and downstream things have broken, but the number is bumped now: logs are broken because you'll be looking for a different number, and a bunch of other things are broken for the same reason.

D: Fair enough. Yeah, hopefully they're not trying this out for the first time in production.

B: A warning, something like: "all workloads need to be hard restarted the first time; proceeding without restarting." We do it anyway, but at least we tell them. Something like that.
E: I wonder, and this might not help us right now, but I wonder if there is value in capturing some version, like "this app was pushed with some version of Korifi", because I'm also just wondering: we could tell them exactly what selectors and stuff to look for, but in general, how does someone know how old a workload is?

B: In Eirini we had a full-fledged migration framework which worked, I think, in the spirit of the Rails one, where it persisted a version. It was not the Eirini version; it was the migration number, basically, or the timestamp or something, on the objects, so that it would be able to say: this object hasn't been migrated to the latest thing, so it will apply all the migrations.
B
It
was
inspired
by
rails
right
where
it
would
just
run
when,
when
you
boot
up
the
new
Arena,
this
thing
spins
up-
and
maybe
it's
a
bit
too
much.
But
the
time
is
solved
as
a
bunch
of
problems
because
yeah
we
we
found
ourselves
in
situations
where,
like
we
had,
we
just
had
to
migrate
a
bunch
of
things
up
front.
E
B
E
D
D
D
What
you
really
want
to
do
is
just
change
some
other
field
and
not
change
the
selector.
What
if
you
just
left
the
selector
as
is,
is
that
gonna
not
select
stuff
yeah.
A
It's
it's
fine
on
the
the
stateful
sets
of
pod
relationship,
but
you
need
that
pod
version
to
be
set
correctly
for
locks
at
the
moment
and
things
like
that
and
metrics.
A
So
if,
if
you
then
update
the
version
in
the
template
for
the
Pod,
the
pod's
gonna
get
the
new
version,
but
then
the
selector's
got
the
old
version.
So
then
the
staple
set
loses
its
pods
and
then
creates
some
new
ones
for
you.
A
Yes,
it's
it's
a
shame.
Version
was
in
there
yeah.
D: I was just trying to think how we wriggle ourselves out of this temporary upgrade transition. Well, maybe we could just temporarily leave the version in until the next time they do a hard restart or a push.

C: If the main thing we're concerned about is downtime on the restart, then when the app workload reconciler detects this, could it create the new stateful set and then delete the old one, instead of the other way around, so that the new app comes up first and for a while there are just extra apps?
B: Yeah, just a question, because we were looking at the validation stuff, since we had some refactorings in mind. It looks like at the moment the most urgent change to the whole validation thing is this switch from the old library to the new one, right? I feel like at the moment we've probably only introduced the new library for manifest validations, and the old one is still there everywhere else, but I was looking at the way it's done.

B: In terms of how we use the library, it's not that different from the old one: the old one had annotations on the objects, on the structs; with the new one, we implement a validate method on each type that we want to validate. I think at the beginning we had talked about having standalone validator objects instead. So my question is: given we are using them in very similar ways, does it still make sense to switch? I remember... I remember also, the reason was errors.
B: We thought what we got back was better error messages, because we did see that the error messages seemed to look a bit better, and the new library returns a slightly more structured error that can be deconstructed to build error messages the way we like. So that is the main reason.
B: So maybe we should just proceed, going struct by struct: strip off the old annotations from the old library and replace them with validate method implementations that validate the same things. And even before we do that, probably backfill tests if they're not there, so that we can verify that when we switch from one library to the other we haven't dropped any validation. Basically, okay, yeah, I had that suspicion. I remember talking a lot about error messages, because all the error messages were just bad and there wasn't much to them.