From YouTube: Evaluate the post-deploy migration info
B
Okay, awesome. So thank you for joining. This meeting is to evaluate and reassess the information we have available for post-deployment migrations, and there are two scenarios in which we are going to need that information. The first one is when we are executing the post-deployment pipeline: once this pipeline is started, it is going to execute the post-deployment migrations on staging and then on production.
B
So I left a comment in an issue and you responded that we have this very handy command in Rails that is going to return the list of pending migrations, right? So my question now is: how can we return that information, or make that information available to release tools?
A
So this is a great question. Well, first — the first thing could be that we just export this as an artifact. Actually, let me send this: I'm going to paste the link in the document... so, that's the link now.
A
What I was thinking here is this, right: we have that section of the Ansible playbook that can perform this type of check, and right now it is integrated in the regular pipelines, so that we know we run it after the regular migrations, so that the status of the database is counting only post-deployment migrations. And when we run that script, we scrape out only the down migrations — we only select the down migrations, which means they have not been run.
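A minimal sketch of that selection step, assuming the check shells out to `rails db:migrate:status` — the sample input below is a hypothetical copy of the status table Rails prints (columns: status, version, name), not output captured from the real playbook:

```shell
# Hypothetical stand-in for the output of `rails db:migrate:status`
# run on the node; in the real playbook this would come from the app.
status_output='   up     20230101120000  Add index to users
  down   20230215093000  Remove legacy column
  down   20230301110500  Backfill widget counts'

# Keep only "down" rows, i.e. migrations that have not been run yet,
# and print their version timestamps.
pending=$(printf '%s\n' "$status_output" | awk '$1 == "down" { print $2 }')
printf '%s\n' "$pending"
```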
A
This gives us the timestamp of the migration file and some kind of un-camel-cased version of the name — something like this. So the second part is a pointer to the migration file, and both things can be combined to get the proper file name. But what we need right now is just — we want to know.
A
If there are any. So, we were just counting and generating a Prometheus-scrapeable file, so that the node exporter publishes that information. And if we want to have the list, maybe we should consider something like dumping that output to a file and saving it as an artifact.
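The textfile part of that could look roughly like this. The directory path and metric name are assumptions, not the team's actual ones; node_exporter's textfile collector picks up any `*.prom` file in its configured directory:

```shell
# Count of pending post-deployment migrations; in the pipeline this
# would be the result of the pending-migration check on the node.
count=2
textfile_dir=/tmp/textfile_collector   # stand-in for --collector.textfile.directory
mkdir -p "$textfile_dir"

# Write to a temp file first and rename, so node_exporter never
# scrapes a half-written file.
cat > "$textfile_dir/post_deploy_migrations.prom.tmp" <<EOF
# HELP pending_post_deployment_migrations Pending post-deployment migrations on this node
# TYPE pending_post_deployment_migrations gauge
pending_post_deployment_migrations $count
EOF
mv "$textfile_dir/post_deploy_migrations.prom.tmp" \
   "$textfile_dir/post_deploy_migrations.prom"

cat "$textfile_dir/post_deploy_migrations.prom"
```

The same output could be dumped to a second file and declared as a CI artifact, which is the option discussed here.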
A
Now, the thing that I linked is interesting, because they're trying to figure out if there is an easy way to trigger a pipeline and reference the artifacts from that pipeline from the upstream pipeline. And while they're saying that the syntactic sugar is not there yet, they seem to mention that there is a clever trick with dotenv artifacts to actually do the thing right now.
A
I have to figure out how, because I just remember that conversation: there should be something in between where you can say "I want things from this job", and it will work if you save, say, the pipeline ID or something in a dotenv — something like this, by the way.
A
So my point here is this: the only place where we can get safe information about this is the node itself, because it's the only thing we have that can check this, right? So we could try to extract it from that. The reason why I was thinking about triggering a new pipeline just for the specific purpose of checking is that, basically, one of the problems we have is that we don't really have shared storage for the things we are doing.
A
So if we had some kind of trusted shared storage, we could just say: when we do the regular migrations, we also upload the current state to that storage, and when we run the post-deployment migrations, we just refer to it. But yeah, given we don't have it, we can just leverage the regular artifacts and say: now I'm going to run things; I'm sure that I have locked the environment, so I'm going to run, and nothing else is touching the environment.
B
The way we used to have it for QA issues, something like that. And the second approach would be to explore the hack — I'm going to call it a hack — that is in this issue, to see if we can make something similar. But yeah, the idea would be to store it as an artifact. Okay, got it.
C
This is very different from what you all are describing today — that's fine. So we know via auto-deploy what container we deploy, or what version of the app we want to deploy. What if we extended auto-deploy — I don't know how to do this, by the way — so that we asked a new deployment to install the same version of the GitLab image into our deployments?
C
And that would provide us with the datetime stamp of that migration, always available in Prometheus, and we could set differing statuses: if it says "hey, it's been run", it has a value of, say, zero, and if it hasn't been run — it's pending — it might have a value of one, or something to that effect.
A
How do you keep the cardinality down? Because I was thinking about the same thing. I was trying to figure out how the GitLab exporter works and whether there is a way to add an extra scraper into it. Then I re-read Myra's question, and she wants to know the names of the migrations — and names, to me... I was thinking, yeah, you can use labels, but then every migration will just create a new label.
A
Yeah, we just talked about this. No, I mean — my understanding of the problem with label cardinality is that if we have a label that is a name, and you can put every name on the file system in there, then it explodes, because each series will just appear once. Say one means "to run" and zero means "applied", right? So you will scrape it as one once, then it just goes to zero, and you kind of never care about it anymore.

C
Precisely. On that token: what if we only include the migrations that need to run — they'll have a value of one — and if the migration has been run, it disappears from Prometheus? That helps keep the cardinality low when you're searching within a small window. If you look at the timeline of an entire year, the cardinality will obviously be very high, but we would never search over the course of it.
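A sketch of what that proposal might emit — metric and label names here are assumptions, not anything the team has agreed on. One series per pending migration; an applied migration is simply left out, so its series vanishes on the next scrape:

```shell
# Hypothetical list of pending migration versions; applied migrations
# are simply not listed, so their series disappear from the output.
pending='20230215093000
20230301110500'

# Emit one labeled series per pending migration, each with value 1.
metrics=$(printf '%s\n' "$pending" | while read -r version; do
  printf 'pending_post_deployment_migration{version="%s"} 1\n' "$version"
done)
printf '%s\n' "$metrics"
```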
A
That's a good question — actually, what I want to ask is: is the problem of label cardinality overall, across the whole time span, or is it just related to the window of time you're searching? Because I don't know the details of the problem; I know there is a problem and that it should be avoided.
A
But I don't think that gives us structured information. The thing that Skybox was proposing is having something that runs and can basically compare the status of the database with the list of migrations known on its own pod's disk. So it has no idea that a migration is running; it can just see that.
B
Yeah, I think an artifact would be the fastest way in which we can get this — the information that we need — and probably something more robust would be something like what you are proposing, Starbuck.
A
Yeah, probably something like the one you're proposing could be easier to implement when we end up having, say, migrations running as a Job in Kubernetes, because then we have already entered the phase where we say: now we schedule extra stuff on the cluster that is somehow tied to what we are running at the moment. So we would already have everything in place to put in things that can coexist with the chart deployment but are not part of the chart deployment itself.
B
Okay, well, I think we have our answer now for scenario A. The next scenario is basically rolling back a package. Right now we cannot roll back if the package has post-deployment migrations. In this case, since the post-deployment migrations are going to be executed independently, whenever we run the ChatOps check command...
B
...we need to know if the package has a post-deployment migration, as we do now, and whether the post-deployment migrations have run. Now, you left two proposals, basically about how we can know that a post-deployment migration has been executed: the first one being basically to parse the pipelines and the jobs to find out, and the second one about redefining our rollback rules, which is something that I didn't quite get, Alessio. I don't know if you can please expand on that?
A
Right now we support rollback of one single package. So when we ask to check if we can roll back, we have the current version and the previous version. Eventually we may have the new version, which is something that is rolling out at the moment but is not yet fully installed on the system — so, by definition, its post-deployment migrations never ran, and it is safe by definition to roll back. So the point is this: we can go back one version at maximum.
A
We did a two-version rollback once, but it was a manual effort, with Robert doing due diligence in manually checking for the presence of post-deployment migrations and things like that in the previous version. But, I mean, now we are talking about the rules for post-deployment migrations that we have in place, and the checks and things like this. So what I'm saying here is this: because we used to always run post-deployment migrations...
A
...we had to check whether or not the package includes post-deployment migrations. But if we remove post-deployment migrations from pipeline generation — from the pipeline that installs stuff — we no longer care about the content of the package itself. We only care about what the last version that ran post-deployment migrations is, because that is the point in time to which you can roll back. So, even extending the current rules, we could potentially roll back up until the last version that we know ran post-deployment migrations.
A
So my point was this: when we schedule a post-deployment migration run — if there are post-deployment migrations, we run them — then we mark the package as deployed to, let's say, db/production or whatever, just creating a new environment that reflects the status of the database and not the status of the fleet, so that when we check, we check that version instead of the production version, for instance, because we care about the database. And on top of that, we could extend this by saying: I schedule the post-deployment migration run...
A
...I check, and if there's nothing to run, then I don't tag the package as deployed. That gives me a larger rollback window, because I may end up running a post-deployment migration pipeline right now with something that does not include post-deployment migrations, marking it as run, and that would then mean I would no longer be able to roll back past that point — which is not true.
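The tagging decision described here could be sketched like this — the tracking file, package name, and messages are hypothetical stand-ins, not the real release-tools implementation:

```shell
# Pending count as reported by the migration check; 0 means this run
# had no post-deployment migrations to apply.
pending_count=0
package="16.1.202306150320"                   # hypothetical package name
tracking_file=/tmp/db-gprd-deployed-version   # stand-in for the db/gprd environment tracking
rm -f "$tracking_file"                        # clean slate for the sketch

if [ "$pending_count" -gt 0 ]; then
  # Migrations actually ran: move the db/gprd marker forward, which
  # shrinks the rollback window to this package.
  echo "$package" > "$tracking_file"
  result="marked $package as deployed to db/gprd"
else
  # Nothing ran: leave the marker alone, so rolling back past this
  # package remains allowed.
  result="nothing to run; db/gprd marker unchanged"
fi
echo "$result"
```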
B
Yeah, my question on that is: post-deployment migrations are going to be executed, let's say, at least once a day, and that means they are piling up from different packages, yeah? So, at the moment we are going to execute them, we kind of need to know which post-deployment migrations belong to which package.
A
So what I'm saying here is that we don't really care which package includes a post-deployment migration, as long as we know whether or not it has run on the environment, because the code itself is designed to run with or without the post-deployment migration. It's just that once you run it, you can't run the previous code.
C
So say we have deployed three packages, and assume one migration each, so we've got three bottled up, ready to go. We deploy a fourth package which has a migration. We don't want to run the migration of that fourth package; we just want to run the migrations for the first three packages that were deployed. Is that the goal?
A
That can change in the future, but the point is that, at that point in time, you pick what is running on the environment that you're targeting and you run all of its post-deployment migrations. So if something got deployed just five minutes before — by deployed I mean fully deployed, so you upgraded all the old machines — then those post-deployment migrations will be installed at that point in time.
A
So the original idea was to kind of have a gate on post-deployment migration execution, so that, instead of doing this on a daily basis — or whatever the schedule is — it's just up to you: as a release manager, you take a look at how many post-deployment migrations we have, decide whether it is a good day to run them or not, and then you run them. But — Myra, correct me if I'm wrong — what I think is happening here is that we are taking a first step in that direction.
A
We want to make sure that we don't delay the execution of post-deployment migrations too much from the deployment itself, and things like that, and so we are going to start with a daily thing. That kind of means there is a point in time every day where you mark the thing and say: nope, from now on we can't roll back. And then we open another kind of rollback window, yeah.
C
There's four little chicken nuggets rolling around on the floor — but no, I do have...
C
Why did my microphone change? It's still the Blue Snowball — okay. So I have a concern, or many, but the one that screams at me is that we do occasionally have migrations...
C
My concern is, if we bottle these up — question: I don't know how database migrations are deployed today. If there's a package with two migrations, for example, are they run at the same time, or are they run in sequential order? — Sequentially. — Sequential, okay. So I guess the concern I have is: sometimes we have post-deployment migrations that take a long time to run. I think it was one or two weeks ago that we were trying to drop an index, but, you know, the table was...
B
No, no, no — that's okay, yeah! I can see your point, Scarves, but also consider that now you will have some sort of control over when you are going to execute those migrations. And when we have large migrations right now, release managers are supposed to know; the way they are supposed to know is very manual, because the author has to let you know. Sometimes that doesn't happen, but that's another story.
B
So this is kind of the first iteration. The second iteration of this epic will be to actually classify the post-deployment migrations based on their nature — the nature being, like, how much time it is going to take, whether it is going to change a structure, whether it's going to read data — and based on that nature we can execute it on a specific schedule. For example, that migration that was dropping an index doesn't have to be executed on weekdays.
A
That's the thing we fixed and discussed in the merge request itself: by getting these things out of the regular pipeline, we are taking them out of the environment-blocking flow, because the sequential order of promotion was kind of giving us the same protection, even though the post-deployment migrations were already outside of the environment locking.
A
So what Myra did is that now, when you trigger a post-deployment migration from release-tools to the deployer, it will block the environment. So they will compete, obviously, but the first one gets in and the other one will have to wait or bail out.
B
Starbuck, feel free to add any questions you might have on the epic or in an issue — we can discuss them there. So, we have 15 minutes and I just want to be sure that I get the rollback idea clear. Just to put an example — Alessio, returning to your original idea — let's say that we have three migrations, three post-deployment migrations, piled up right now, all of them from three different packages, and now we have a new package coming up that was deployed on production.
A
So let's imagine that we have packages named with numbers: we have one, two, three, four, and five. I'm saying five because I want to say that one is a package from yesterday that included a post-deployment migration that we ran. Then we have two, three, and four, which are the three packages that you mentioned — each one of them has a post-deployment migration — and then you have five, which is the last one, which still has a post-deployment migration and which we want to roll back. Okay, okay.
A
So, in my proposal, yesterday, when we ran post-deployment migrations on their own schedule, we had package one. So at that point in time, when we were marking the tracked deployments — what we do with release-tools — we would have tracked that package one was deployed on db/gstg (staging) and then on db/gprd.
A
These are some nice side effects: we track on merge requests when the post-deployment migrations are run, which is something that we don't have right now and which I think is a cool addition for engineers working on that. And it gives us all the same API retrieval stuff, where we can ask: oh, have we run this? Today we ask: have we run this package — have we applied this package in production?
A
Now we can also ask: have we run the post-deployment migrations included in this package in staging or production? We only care about staging and production, because by definition we don't run post-deployment migrations in the canary environment. Okay. So we have this thing that is tracking the deployment of the post-deployment migrations. Now, back to today: package five — we need to roll it back, so we run the ChatOps check command, and today that check is checking — well, obviously, it's a check.
A
What we want to check is the last deployment on db/gprd — the last deployment, actually. So this gives us the point where we can say: db/gprd is running package one; you are on package four — five, sorry, you're on package five. So, in terms of database compatibility, this gives you rollback from five to four, to three, and to two. You don't want to do all of this; you just want to say, okay, you can go back one version, because you only want to go back one step at a time. But that's the thing, right?
A
What is running on the database environment is package one. So if this is package four, you can roll back — it's not the same. And let's say we want to do this on package two: for package two you can still say yes, because package one is the one running, and it's already running, so you want to go back up to one. So it's fine.
A
No — you bail out from recording the deployment when you check that there are no post-deployment migrations. So you run the pipeline for the migrations daily; we are not talking about rollback, we're talking about post-deployment migrations now. Okay, so, we were discussing before: how can I check if there are pending post-deployment migrations?
A
That's the first point we addressed today, right? And this is what we care about. So when you start the pipeline for post-deployment migrations, you can check if there are pending migrations.
B
Sorry, go ahead. Which deployment is it going to be, if we have different post-deployment migrations from different packages? The...
A
The package that is on the deploy box, because you don't care which package included it, but you care when you ran it. — Okay, got it. — So for sure package one includes that post-deployment migration, even in the case where it was coming from package zero, right? So it could be that the migration was from the previous package, but yeah, it...
A
...never got to the post-deployment pipeline phase — you don't care; you just want to know a point in time. Obviously this is not perfect, because you can have a situation where the migrations were in one package, the second one didn't have migrations, but you ran the post-deployment migrations there, so you're marking something that you could potentially roll back.
A
So that's the only — I think this is the only use case where you want to fall back to the previous implementation, the one that we already have. So let's say you want to roll back a package: you're on package five, you want to go to package four, and that exact package is marked as run. So you know that the package is deployed to the database — that's the thing, right? You deployed that package on the database.
A
So in that case you can compare against the real previous deployment — the deployment to the nodes, or to the fleet, not to the database itself — and see that there are no new post-deployment migrations in package four, which means that if I roll back, the code is still able to run with the post-deployment migrations applied. That's the only edge case.
A
Got it. I think if we write those things down it could be easier, because it would kind of help us figure out whether we have gaps in this approach. But to me — I'm quite sure this is okay. If we were implementing this from scratch right now, I would say: let's just forget about the edge case and say that if it's the same version, we don't roll back, regardless of the content. But we are coming from an older implementation that is already doing this check.
B
First, I'm going to write down the action items. The first one will be to make the list of pending post-deployment migrations available to release-tools — which is basically artifacts — and then explore that.
A
Okay, yeah. I would say that even if we end up choosing another logic to decide when we want to roll back, being able to track post-deployment migration runs at the MR level — so you know...
A
...when we ran it — is still a viable addition. Maybe it's not in line with what we are trying to do here, but it's still important information for developers to have.
B
Oh yeah, definitely — there is an issue about that, because I imagine that developers are going to be puzzled about their post-deployment migrations, so yeah, this is super helpful. Okay, so I guess we can start by making the list of pending post-deployment migrations, because that's what we need anyway. I'm going to open up the issue — I don't know, Alessio, if you're available to take this one.
B
Okay, awesome. Scarves, is there anything that you want to add?