From YouTube: 2022-10-05 - Delivery:Orchestration demo - APAC/EMEA
A
Great, okay, so welcome. This is October the 5th, 2022, and this is our APAC/EMEA orchestration demo. So let's just run through a quick action check-in and see if there's anything we need to follow up on here. I had an action from last week, which was just to check in with the maintainers and make sure that we knew they were on board and knew where we stood. So I think that's looking good!
B
Yes, I did. We're only missing one maintainer; overall, the reaction was very positive. I pinged him, but got no answer — I mean, he answered me and told me he was going to look at it on Monday, because it was Friday evening, and then it probably slipped through, so we're re-pinging. But overall the reaction was very positive.
A
Okay, awesome, that's great. And then Myra has added in her update, so she's completed the Gitaly analysis issue and added an overview there. So that's good. And Mario has also completed the test to make sure that Pages can build from a SHA.
A
So, a nice set of progress. So yeah, let's jump to number two.
B
Sure. So I was taking a look at the overall status of the... so, how can we move to the implementation phase of this? I've spent some time doing some polishing here and there. So the first thing — this is all implementation, right? So 2607 is the very first thing we need, and it's a refactoring for being able to gather the currently deployed version.
B
There is a good degree of non-interaction between the two things, so the public release class — the Pages release — could be developed in the meantime, even though we still don't have a way to retrieve the deployment version; because until we link this into the real release process, which is not the last step but the one before, it basically does nothing.
B
So we can start building and ticking off those things, and testing on a fork or something like that to see if it's working. Yep. Those are, let's say, the two implementation sides of it. I was reviewing Myra's work on CNG and Omnibus, and I noticed that there is actually a bug in the CNG image: it is not reporting the right version — not only in our case, in any case. So every CNG Pages image is just reporting a not-real, not-accurate version.
B
So I have made this merge request, which is in review. I still have to check the comments, because I saw it was reviewed overnight, I think. And then the last section of this is all about rolling this out: creating the stable branches for the old releases and things like that. So it's not a lot of work.
A
Awesome, thanks for putting that together. Yeah, and I mean, as I was saying just a little earlier, I think the capacity things are unfortunate, but we're fine, right? As we came into this quarter I think we expected to have a bit more capacity; for lots of good reasons we haven't had as much. I'm completely fine with that, and I think it's probably just a really good one for us to have in mind as we get into Q4, right?
A
What I will do — Myra is back the same day as me, so next Thursday — is ask that she picks up that refactor for the deployment finder as her first task. She's around only for about a week and a half before a conference, but hopefully that will be enough time for her to get that refactor complete, and then at least we've got that big step moved along. And then I'll ask you:
A
If you get capacity, then perhaps you can do the public release class for Pages as a kind of follow-on from that.
C
Just a question about the refactor, to help me understand more than anything. I know that we've got the release metadata repo, and that contains all the files for every auto-deploy — you know, all the versions — and it's a JSON file. How do we know which one of those is actually what's running on .com at the moment? I'm just trying to understand.
B
This is already implemented now — maybe in the deployment finder, or not — but basically the ProductVersion class already handles this, so you can just ask it: give me what's inside.
C
So basically, instead of looking at the three spots for their deployments like now, we're just looking at the one. But then we will still need to actually get that — that will give us a version, right, but then we need to get the file and read from it, extrapolate out the bits of data we need — correct? So this...
B
The class helper just gives you the representation of the components in the deployment, so it gives you the SHA. So you can definitely ask it — it's already handled in the class — once you have a ProductVersion, which is...
C
It's actually doing that with the data — yeah, cool, okay, just to understand. Okay, so those files: we'll really be moving towards that repo and those files — sorry, and those classes — being the actual single source of truth for this kind of stuff. Which makes sense; I'm just clarifying.
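For orientation, a minimal sketch of what that lookup could look like — the real release-tools ProductVersion class and the metadata file layout will differ; the JSON shape and file name here are assumptions only:

```ruby
require 'json'

# Hypothetical reader for a release-metadata JSON file: given the file for
# a deployed version, report each component's pinned revision. The
# 'releases' key and per-component fields are assumed, not the real schema.
class ProductVersionSketch
  def initialize(path)
    @data = JSON.parse(File.read(path))
  end

  # e.g. component('gitlab-pages') => { "ref" => "...", "sha" => "abc123..." }
  def component(name)
    @data.fetch('releases').fetch(name)
  end
end

version = ProductVersionSketch.new('15.5.202210050320-abc123def45.json')
puts version.component('gitlab-pages')['sha']
```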
A
We may also — well, we're expecting Matt to be back at some point, hopefully this quarter, so we may also have some additional Ruby capacity there; he could probably pick up one of these tasks. So I agree that we're looking pretty good. I feel like the big things with Pages were actually understanding and defining managed versioning and getting the maintainers on board, so I think we're looking pretty good there.
C
I don't want to drag them together too much, but — because we have to do Pages and we have to do KAS — I would really hope that we can try to make the KAS one almost literally a cut-and-paste, you know, with very little independent logic. It should really be almost identical to the Pages one. As we work through Pages we should really be thinking about: how can this be...
C
...almost identical for KAS as well? Not: oh no, you have to do something special. Because both those components, especially once they get the full managed versioning stuff, should be very similar in how they work, and in how we package and deploy them from our perspective. Yeah.
A
So at what stage would we have Pages, or any other component — like Gitaly — get the bit where release-tools gathers up the merged changes and brings them into auto-deploys, with the release-tools bot doing that step? Does that make any sense? You know, the creating of the...
A
It's like, okay, great. Because what I want to make sure we think about when we do that: one thing that Myra highlighted in the Gitaly analysis is that the way failures get alerted on that stuff doesn't really work well for the Gitaly team. So it might be a good time for us to actually do a review of that when we do it for someone else. Yeah, yeah.
B
So, if I remember correctly — because this is a note that I left in the blueprint itself — the original version of the requirements mentioned automated version bumping as a requirement, which is not accurate, because it's actually an alternative. You want to bump the version file if you want continuous delivery — let's say release equals deploy, exactly what you get. If you don't want to get into independent deployment, your top level of automation is automated release bumping plus deploying with auto-deploy.
B
But if you want to go independent, there's no need for a bot to bump the version, because it's the development team that will run their own deployments. There's a synchronization problem that we still have to figure out then — how do we know what's the right version — but that's another topic, right? So they would never reach that point.
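For context, the "automated version bumping" path is roughly this shape — a sketch only, using the public GITLAB_PAGES_VERSION file convention in gitlab-org/gitlab plus the `gitlab` API gem, with an invented helper rather than the actual release-tools code:

```ruby
require 'gitlab' # the `gitlab` API gem; assumed available

Gitlab.configure do |config|
  config.endpoint      = 'https://gitlab.com/api/v4'
  config.private_token = ENV.fetch('GITLAB_TOKEN')
end

# Hypothetical bot step: bump the component's version file on a branch and
# open an MR, so the next auto-deploy picks the new version up.
def bump_pages_version(new_version)
  project = 'gitlab-org/gitlab'
  branch  = "bump-pages-#{new_version}"

  Gitlab.create_branch(project, branch, 'master')
  Gitlab.edit_file(project, 'GITLAB_PAGES_VERSION', branch,
                   "#{new_version}\n", "Bump GitLab Pages to #{new_version}")
  Gitlab.create_merge_request(project, "Bump GitLab Pages to #{new_version}",
                              source_branch: branch, target_branch: 'master')
end

bump_pages_version('1.63.0')
```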
C
I would almost say, for master — so let's say you're a component and you've adopted independent deploys, whatever that looks like — I would almost say that in the gitlab repo, for master, the version file should probably contain something like "independent-deploy", right? Just to... we should, for master; obviously on the stable branches that'll reference the stuff that we...
B
That's the tricky point where we want to untangle the thing, right? What we build and test today — Workhorse — works because it's in the same repo, obviously, and I don't know if Pages gets built or not; that's the thing, right? So maybe what we want to do is not go in that direction for unit testing for the other components, and instead find an alternative proposal for the end-to-end testing — which I think is also something that Mikhaila pointed out in the blueprint as well.
B
So maybe we should just say that what is in place today can stay as-is, because I don't think we are going to have Gitaly independent deployment very soon, especially with the team split they are having and the huge re-architecting and everything. So that's not a problem at all. And Workhorse is inside the monorepo, so that's fine as well — we are not doing anything for them because it's already in there; there's no need to synchronize stuff, because it's in the same repo. So probably we're in a good spot, I don't know.
C
To be clear, I think there are unit tests, which will run on their code — some, you know, some components will run them on their code. I've called it out as system-level testing; I'm not sure if that's the right term, but yeah. This kind of using-the-gitlab-repo for end-to-end integration testing is okay, but I think we need a better interface — just like we're seeing now that we need this interface with Distribution for packaging.
C
It's almost like a pipeline: okay, we're going to package it via CNG; now we need some interface with Quality to say, okay, this new component needs to do this full-system — you know, integration — testing. Okay, that's fine. And actually, thinking about it more, going back to what we were talking about earlier:
C
If a component — and this is some of the issues we have — if a component goes to independent deploy, aren't we going to have to make release-tools go back again and start looking at that component's deployments to figure out what to tag? If you know what I mean, because...
B
That's a good question, and it goes in line with the conversation that I started today on Platform, right? It really depends on what we want to do. I would say that — so, the release metadata is a tracker of packages.
B
When we ask the packagers to build something which contains stuff, then we track it. So without independent deployment, that thing is a valid source of truth, because we package first and then we deploy. With independent deployment this is going to change, and we have to figure out how we want to revisit it: whether we want to add extra information there, or whether we want to track at the project level for their own independent deployments — that's still an option.
C
We could — because we will own the independent deployment pipeline — make a step in that pipeline populate the release metadata repo with, like, a component deployment record, you know, an independent deployment, and then create the files under a different subdirectory per component: "independent deployment components", this is their directory, with their files of what they've done. That's another way we could work around it, I guess; it still keeps everything within the one repo, and we can use that as the source of truth.
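As a rough illustration of that pipeline step — the directory layout and record fields below are assumptions for the sketch, not an agreed format:

```ruby
require 'json'
require 'fileutils'
require 'time'

# Hypothetical step in an independent-deployment pipeline: append a record
# of what was just deployed into a per-component subdirectory of the
# release-metadata repo, one file per deployment.
def record_independent_deployment(repo_root, component, sha, environment)
  dir = File.join(repo_root, 'independent-deployments', component)
  FileUtils.mkdir_p(dir)

  record = {
    'component'   => component,
    'sha'         => sha,
    'environment' => environment,
    'deployed_at' => Time.now.utc.iso8601
  }

  # One file per deployment keeps an append-only history we can query later.
  File.write(File.join(dir, "#{record['deployed_at']}.json"),
             JSON.pretty_generate(record))
end

record_independent_deployment('.', 'gitlab-pages', 'abc123def456', 'gprd')
```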
C
It really comes down to, yeah, whether we want to pull all of that source of truth into that one repo — one set of deployments in that repo, one set of files in that repo — or the model of: you have to go to their repo and see their deployments to see what versions they have, rather than go to this repo and see their deployments. There are pros and cons to both ways.
C
I can see good things and bad things; probably keeping everything in one repo together might be the easiest, especially for permissions, and in general for ease of finding things as a single source of truth. But yeah, we'll have to figure that out as well.
C
Let's say you're Pages and you're on independent deploys: you're doing your independent deploy, and it's kind of like — you have to go into your security mirror, merge your thing, do your independent deploy to GitLab.com; then we have to say, yep, thumbs up, that's running on GitLab.com, and then, you know, essentially package that up for the security release as well. So that's going to be...
A
This is the stuff which, actually, I think — Skarbek and Graham — was what we were talking about on the POC. So I laid it out this morning, or earlier today: Graham and I were talking about the POC that Skarbek was mentioning, and that is going to end up being a kind of paired thing, hopefully in a few weeks, so Graham and Skarbek can spend some time working on it.
A
But we were talking a little bit about almost trying to figure out what the pain points are going to be, and the pros and cons, as we start to look at those options.
A
Do we have — what would be, like, a high-level description for that version tracking? I've got an epic I'm starting, trying to break these things down, and it'd be good just to put a line on this so we don't forget it. Would it be — is it kind of like a single source of truth for independent deployments? Or how would we categorize that whole problem of needing to make sure we know what went out?
C
It's a single source of truth — or I would almost use the word "ledger" of deployments, right? You need to be able to see a history: these are the deployments of the component that happened at which point in time. Because, I guess, when we tag an RC, we just kind of choose literally what is running when you tag it, right? We just say: okay, I'm tagging right now; whatever happens to be running right when I do it, at that point in time, that's it.
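Continuing the earlier sketch, a "ledger" in that sense is just an append-only history you can query by time — again purely illustrative, reusing the assumed per-component record files from above:

```ruby
require 'json'
require 'time'

# Read every recorded deployment for a component, oldest first.
def deployments_for(repo_root, component)
  Dir[File.join(repo_root, 'independent-deployments', component, '*.json')]
    .map { |path| JSON.parse(File.read(path)) }
    .sort_by { |record| Time.parse(record['deployed_at']) }
end

# Answer the tagging question: what was this component running at time T?
def running_at(repo_root, component, at_time)
  deployments_for(repo_root, component)
    .select { |record| Time.parse(record['deployed_at']) <= at_time }
    .last
end

record = running_at('.', 'gitlab-pages', Time.parse('2022-10-05 12:00 UTC'))
puts "At tag time, gitlab-pages was at #{record && record['sha']}"
```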
B
If we want to see — I'm building my thoughts as I'm speaking — so from one point we leverage the deployment tracker to find out what's in there, and we're going to leverage the release metadata deployment tracker. But at the end of the story, what we could be doing is just asking the cluster: what are you running?
B
We are fine with what is running right now. What are you running, and what was built, right?
B
And you have this — because these are already there. We're not talking about removing tracking at the project level and things like that, because that gives great value: it gives you a history of what happened, it gives you merge request tracking, all those things. And it's valuable when you want to roll back, because you say: let me see what we deployed before. But that's a different business logic than the business logic of building the monthly release.
B
That's how we're doing it: we're just reading what's in production and creating a package with those packages inside — this is what we are doing. And in a world where everything starts with a package, which is the world we live in right now, that's easy, right? Because then it's one package that has all this information, and they are all nicely and tightly tracked at the Git repo level, and it just works.
B
But if we're going to change every component independently — because this is what we aim at with independent deployment — then keeping track of those values is going to be harder, much, much harder. So — sorry — my goal is to try to find the information in a single place, which is generic enough that we can look at that single piece of information, instead of having to go into every single project and re-implement this every time we add a new component.
C
I think, yeah, having a metric — we'd have to define a standard metric name and a way for them to somehow expose the SHA of what they're running, right? Because that's what we need. I think a version number may not be as useful as a SHA — I don't know, one of the two — because we need to populate the SHAs into the stable branch when we cut it. But I think it wouldn't be impossible for all the components...
C
...you know, as part of independent deploy, to somehow expose a standard metric that we can define, with a standard format, that says: this is the SHA of what was built, or running, or the code, or what have you. And then we can just, yeah, basically query — query, you know, Prometheus/Thanos — and say: get me the metric, get me the version of these components. And then we could even do stuff like confirm how long the component has been running.
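A sketch of what such a convention could look like in a Ruby component, using the prometheus-client gem — the metric name and label set here are invented placeholders, not an agreed standard:

```ruby
require 'prometheus/client'

# Hypothetical standard "build info" metric: a constant gauge whose labels
# carry the component name and the SHA it was built from. Scrapers read
# the labels; the value is always 1.
registry = Prometheus::Client.registry
build_info = Prometheus::Client::Gauge.new(
  :gitlab_component_build_info,
  docstring: 'Component name and built SHA, exposed as labels',
  labels: [:component, :sha]
)
registry.register(build_info)

# REVISION would be baked in at build time (e.g. from `git rev-parse HEAD`).
build_info.set(1, labels: { component: 'gitlab-pages',
                            sha: ENV.fetch('REVISION', 'unknown') })

# The registry is then served on /metrics, e.g. via
# Prometheus::Middleware::Exporter in a Rack app, for Prometheus to scrape.
```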
C
Okay, so it's running this version: it's been running for five minutes, it's been running for a day, it's been running for, you know, a few hours. I don't know if we care about that, maybe. But we could also combine that with: okay, it's running this release, and what is the deployment health of that release? Once again, a lot of this is more complicated than what we do now, but it would also allow us to start cross-checking...
C
Some
of
those
things
not
only
is
what
subversion
that's
currently
running,
but
make
sure
it
is
a
healthy
version.
So
you're,
not
just
tagging
a
version
that
got
deployed
five
minutes
ago
or
what
have
you
that
would
be
this.
That
might
be
the
simplest
mechanism
you
know
making
and
then
I
guess
we
could.
Probably,
if
we
had
that
information,
it
might
make
things
like
some
chat.
Ops
commands
easier
right,
because
then
you
could
just
go
to
the
metric
or
you
could
even
do
a
dashboard
and
you
just
dashboard
what
versions
are
running
on.com?
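The ChatOps/dashboard side of that would then be a straightforward instant query against Thanos/Prometheus — a sketch, with the endpoint and metric name carried over from the assumptions above:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Ask Prometheus/Thanos for the labels of the assumed build-info metric and
# return { component => sha }. Endpoint and metric name are placeholders.
def current_versions(prometheus_url)
  uri = URI("#{prometheus_url}/api/v1/query")
  uri.query = URI.encode_www_form(query: 'gitlab_component_build_info == 1')

  result = JSON.parse(Net::HTTP.get(uri)).dig('data', 'result') || []
  result.to_h { |sample| [sample['metric']['component'], sample['metric']['sha']] }
end

# A ChatOps command or dashboard panel could simply render this hash:
puts current_versions('https://thanos.example.com')
```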
B
Release longevity will still be a problem in any case with the thing that we are describing. I was thinking about this —
B
I left a message and a comment in the blueprint about this as well, right? If some development team merges a lot, and we are really focused on giving them the continuous delivery option, this means that some packages will just live for a short amount of time in production, because they will be overwritten by the next one, especially during peak working hours when people are just merging stuff. So this is something, I mean —
B
This is the part that concerns me about this whole thing, right? Because it comes with a great development maturity model, which not every team necessarily has — I mean, changing from deploying once a month to deploying every commit is a big, big, big, big change. So I do understand: if you're going to build new components, you bake this in from the beginning.
B
You know since day zero that this is how it will look, and that's how you approach the problem. But for a component that has already been there — that got deployed once a month, or twice, or three times, depending on how much they were tagging — this is a big, big change: knowing that every time you merge something, it will end up in production, and the only things preventing you from breaking something are test coverage and health metrics.
B
That's the demand to shift left, right? So yeah, it brings more complexity, but usually what we can see is that there could be, say, three options. One is the release train, which is basically how auto-deploy works when you have a schedule: say we want to do five deployments each day, so we have a schedule, and by the time the train starts, whatever is green, whatever is the latest...
B
...gets deployed. This is very similar to auto-deploy, right? It's a fixed schedule: you build stuff, you deploy stuff, and if it's ready by the time the train starts, it goes on the train; if it's not ready, it takes the next one. Then there's this other thing where you may have continuous delivery up until the canary stage — basically with a manual promotion, which we still have in our pipeline — but it's kind of, you can...
B
We can think about whether it makes sense to have this at commit level: every commit gets a new build and goes to canary — staging canary, then QA, then production canary, then QA — and then stop, stop it there, there's nothing more. Then there is the deliberate choice for the development team to go to one of those builds and say "promote", and this rolls out to production. And then the third one, which I think is what is described in the blueprint, is continuous deployment.
C
So what we could do is something like: if you open a merge request — with an empty commit, or even just a normal MR — and you do want to deploy it, you put, I don't know, bracket-deploy-bracket somewhere in the merge request title, and then when that's merged we do a deploy, but for every other merge request we don't. I think we might be able to have something where the developers can have some kind of control. I'm...
C
Sorry — no, I get what you're saying. Maybe I said the wrong thing: maybe not in the merge request title, but in the commit. If you put that in the actual git commit, then you might have the ability — it's not a really great option, it's not a great interface, but we might have something where, with CI, somehow you have some kind of ability.
B
Yeah. They're still clunky though, because if you have two maintainers merging two things more or less at the same time, one edits...
A
I wanted to — so I think a lot of this is implementation detail, which, you know, we can work out; it sounds like we have at least a couple of options, I guess. Let me just jump back to the original idea and what we have in the blueprint. I was thinking about the blueprint as kind of our target state. So:
A
Do you think that continuous deployment is our target state, or is our target state to actually have an in-between option of some variety that lets teams choose?
B
I'm thinking that the target state is giving choice to the team, because a deployment strategy is something that has to take into account the dynamics of the project itself. Just to give an example: you may have a component that has a very, let's say, slow start. Think about us developing Redis, as an example, right? We end up building our own Redis version as a component, and every deployment is expensive, because maybe we're building our own...
B
...our stateful Redis, whatever — so you have to dump the cache and you have to reload it, right? So if every commit ends up in a deployment, you never reach peak performance, because you're always dumping and reloading.
A
Yeah. So how about we say, then, that the in-between is continuous delivery, which is: at some point there is a decision that gets made by the team, where they basically say, we now want to trigger a deployment, right? Yeah — but how specifically we do that, I think we need to figure out, because yeah...
B
I don't know that we have to give the full options immediately. We...
C
So the three options are: a cron-style cadence — a regular, scheduled cadence; fully CD, where every commit, every pipeline is going to do a deployment, and if you have lots of commits they'll all back up behind each other, slowly deploying through; and then the third option is somewhere in the middle, where you decide on some level of continuous deployment, but then you also give that...
B
Yeah, and I think that, also for continuous deployment, it would be a nice opportunity for us to experiment a bit more with merge trains, and see if there is an option to detect a merge train and deploy only from the last commit in the train.
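One speculative way to probe that, starting from GitLab's Merge Trains API (`GET /projects/:id/merge_trains`) — the response fields and the overall approach here are assumptions to validate, not a known-working recipe:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Speculative: skip the deploy unless this pipeline's merge request is the
# last car in the active merge train (field names are assumptions).
def last_in_train?(gitlab_url, project_id, token, merge_request_iid)
  uri = URI("#{gitlab_url}/api/v4/projects/#{project_id}/merge_trains?scope=active")
  request = Net::HTTP::Get.new(uri, 'PRIVATE-TOKEN' => token)
  response = Net::HTTP.start(uri.host, uri.port,
                             use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end

  cars = JSON.parse(response.body)
  return true if cars.empty? # no active train: nothing newer is queued

  cars.last.dig('merge_request', 'iid') == merge_request_iid
end

# In a deploy job: bail out early when a newer train car will deploy anyway.
exit 0 unless last_in_train?('https://gitlab.com', 278964,
                             ENV['GITLAB_TOKEN'], 12345)
```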
C
Yeah, I agree. I think doing that kind of pipeline optimization — it's the same with the delivery... sorry, the deployments-and-environments mechanism: being able to say, okay, these are old deployments — I've got three old deployments and here's a new one, we're all waiting, who's going next — and, rather than go through those three old ones, just skip straight to the new one, and stuff like that. We can get some efficiencies there, I think. But that comes back to how we mirror, because currently we deploy from ops and everything.
A
Cool, yeah, I think that makes sense, and I think that's, I guess, almost the challenge of the blueprint: it's whatever our target state is, right? So as long as it's not impossible, we should stick it in there, and then it may be that, you know, the first year of iterations doesn't get us there, and that's okay, but...
A
Are there any actions that we should take from this versioning work? I think I'm going to take one — and we can adjust the name, or expand it, or whatever — I do want to capture this concept of the deployment ledger on the breakdown epic that I've mentioned to you both; if I can find a link, I'll link it. But are there any other actions that we want to take from this?
A
Okay — if you think about it, a lot of it kind of feeds into the blueprint, so yeah, that's probably where we'll naturally do that anyway. Awesome. And then I just briefly wanted to mention, before we wrap up, OKRs. I know you've both heard this several times already this week, but just so we have it all: what we have on the linked issue is very much a proposal.
A
Is there some way we could work, as a team, to set a reasonable, Q4-sized OKR around independent deployments? I've added into the orchestration section a little mention of the maintenance policy. Our kind of overarching one, I guess: one of our big OKRs for Q4 will certainly be to begin, and hopefully complete, implementation of the maintenance policy extension. Keep in mind as well that we will be a slightly bigger team.
A
We'll have Matt back, we'll have Steve with us as well, and hopefully we'll hire an additional backend engineer reasonably easily in Q4 too. So we will have some additional people, but at the same time we also know there are holidays and things like that going on. So feel free to just bounce ideas around, discuss, drop things in there which you would like to fix, or which would be fun to fix — I'm keen that we're actually, you know, excited by the work.
A
No? Okay, awesome. Well, I appreciate all the chats, and it feels like everything is moving along — I feel like we're pushing that little boulder along. It feels like maybe in a couple of weeks, with the KAS Canary stuff hopefully making good progress once the readiness review is unblocked, and hopefully the version refactor done as well, both will actually make quite a big, visible jump forward.
A
So I appreciate you both continuing to push this along. Cool — I'll chat to you in a couple of weeks.