From YouTube: App Runtime Deployments Working Group [Mar 10, 2022]
A: Welcome to the App Runtime Deployments working group. Today I wanted to use the first meeting as an introduction round for all the approvers who are on the call, and then I would give the word to Dave, who can hopefully give us a first introduction to the cf-deployment release process, so that we have a good overview. Based on that, I should hopefully soon be able to start creating a roadmap for the handover. Our working group's charter is here, in case anyone has not yet seen it.
A: So far I have done a few organizational things, like managing the GitHub organizations and teams; this should all be in place now. We have a list of repositories belonging to our working group, mainly cf-deployment and then the acceptance-test and smoke-test repos, and I have also started on a very first draft of a kanban-style dashboard, but this will have to evolve over time. Maybe we first start with a very brief introduction round, because some of you may know each other, but I guess not everyone does.
A: Did anyone join from Fidelity?
B: Yep, we've got myself and Andy from Fidelity.
A: Okay, you two then. Please introduce yourselves, just a few sentences.
B: Sure. I'm Jim Connor, and I've been working with Cloud Foundry for the last 10 years now. I did a couple of years at Sky TV, a couple of years for the UK Government Digital Service team, and the last six years at Fidelity, all of them on open source Cloud Foundry.
C: My name's Andy Driver. I've been working for Fidelity for a while; originally we were using Cloud Foundry through the Pivotal offering, and then we migrated from Pivotal Cloud Foundry over to open source, and I've been working on it for a number of years now. I work with Jim.
E: Yep, I'm Stefan Merker. I'm actually not listed in the list of approvers for cf-deployment; however, I represent SAP in the Technical Oversight Committee of Cloud Foundry. I was involved in creating this working group, and I have a background of five or six years of Cloud Foundry landscapes at SAP.
A: And Johannes?
F: Hi, my name is Haas. I've been with SAP for a couple of years already, and we've been running Cloud Foundry for about three years now. We have done numerous cf-deployment updates within our Cloud Foundry deployment, so I'm looking forward to contributing to it.
A: Okay, we had Andrew and Jim. Then Philipp?
G: Yeah, hi, I'm Philipp. I'm in Stefan and Johannes' team, and I've also been working with Cloud Foundry for years, so I know the transition from cf-release to cf-deployment.
A: Piotr... how do I pronounce that correctly?
I: You pronounced it correctly. My name is Piotr Kombolsky. I work for Fidelity International, together in a team with Jim and Andy and Sean. We have a big Cloud Foundry installation here. I've been familiar with Cloud Foundry for the last 10 years or so, I would say. I'm a platform engineer, basically.
J: Hey everyone, Chris Clark here. I'm the program manager at the Linux Foundation for Cloud Foundry; I've been working at the Cloud Foundry Foundation for almost six years now, so going back quite a ways. I help out with various administrative things for the community, so feel free to reach out anytime if you have any questions or anything.
K: Hi, good evening. My name is..., I'm working with SAP; it's almost a year now that I've been working in this area. I have done a couple of cf-deployments, but not more than that, so I'm fresh, with close to zero experience I would say, but fresh. Looking forward to working with the team and calixia.
A: Okay, thanks. The last one would be me. I basically share the same history as Stefan at SAP: we started working with Cloud Foundry six years ago, almost seven, and I also had the pleasure of collecting a lot of experience using cf-deployment, integrating it into our many landscapes and getting it up and running.
A: That's a bit unfortunate, but okay; we likely can't find a slot which fits everyone. Maybe I can set up a separate meeting with him, just to get acquainted. Good. Okay, but now I would hand over to Dave, and I'll take notes.
H: Okay, sounds good. So today I just wanted to give you an introduction to the processes we have for managing Cloud Foundry: predominantly the release bump processes, the CI testing processes, and then finally the release processes that we have in place, just so that you can start to get a feel for how this works today and how we can take it forward.
H: Each of these jobs represents a single BOSH release in Cloud Foundry. So, for example, this is bpm at the top of the list; I guess Concourse sorts them by the overall length of the job name, so it starts with the shortest.
H: So we have this lock job, which is just acquiring a BOSH environment, a bbl environment, to do testing on, and then the job kicks off. We validate against release-candidate, because that's considered stable; the idea is that if anything fails, it's failing because of the new release bump. So it runs some tasks to bump the version against release-candidate, bbls up the environment, and then goes ahead, deploys, and runs smoke tests.
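As a very rough sketch, each of those per-release jobs can be pictured as Concourse pipeline YAML along the following lines. This is not the actual cf-deployment CI configuration; the resource, pool, and task names are invented for illustration.

```yaml
# Hypothetical per-release bump job. A real pipeline would have one
# such job per BOSH release consumed by cf-deployment.
jobs:
- name: bump-bpm-release
  plan:
  - get: bpm-release            # fires when the component team cuts a new version
    trigger: true
  - get: cf-deployment          # release-candidate branch, considered stable
  - put: env-pool               # claim a lock on a bbl environment
    params: {acquire: true}
  - task: bump-release          # rewrite the release version in the manifest
    file: ci/tasks/bump-release.yml
  - task: bbl-up                # make sure the bbl environment is current
    file: ci/tasks/bbl-up.yml
  - task: deploy-cf
    file: ci/tasks/bosh-deploy.yml
  - task: smoke-tests
    file: ci/tasks/run-smoke-tests.yml
  ensure:
    put: env-pool               # always hand the environment back to the pool
    params: {release: env-pool}
```

The failure handling described next (pushing to a branch, fetching BOSH logs to GCS, notifying Slack) would hang off `on_failure` hooks in the same plan.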
H: If it fails... and I can go to one that failed here... I think... no, no, that one failed far too early. Let's see if this one failed in a better way... nope, still early. Well, never mind, I will just talk through what happens if it fails, because you can see the greyed-out tasks. If it fails, it pushes the changes to a branch, retrieves the BOSH logs and stores those in GCS, and tries to notify the component team via Slack to say: hey, this thing failed for a reason, the logs are here, you can debug it and figure out what's going on. And then it always tears down the environment, because we don't want to leave it lying around; otherwise we would very quickly run out of pooled environments. So that goes through. There are three different categories that we split this up into; otherwise it gets a bit unwieldy.
H
These
are
what
we
call
the
base
releases
that
are
are
compiled.
They
have
actually
referenced
in
the
safe
deployment
yaml
itself.
There
are
the
the
build
packs
and
then
there
are
releases
that
are
mentioned
in
ops
files
and
the
difference
here
is
these
are
not
compiled,
but
other
than
that.
It's
all
exactly
the
same
testing
so
once
that
flows
through
that
hits
to
develop,
and
that
will
then
hit
the
main
pipeline
that
I'll
get
to
in
a
second.
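For orientation, a release that lives in an ops file rather than in the base manifest is typically pinned with BOSH interpolation operations like these; the release name, version, URL, and checksum below are made up.

```yaml
# Hypothetical ops file adding a release that is not part of the
# base manifest; a version bump rewrites version, url, and sha1.
- type: replace
  path: /releases/-
  value:
    name: some-addon
    version: 1.2.3
    url: https://bosh.io/d/github.com/example/some-addon-release?v=1.2.3
    sha1: 0123456789abcdef0123456789abcdef01234567
```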
H: The other thing that we have, obviously, are our PRs, of which we already have a couple open at the moment. They have a much smaller set of tests run against them. The only checks that we have here are that the base branch should be develop, and then we have a set of fairly simple unit tests, and really these are just checking for consistency within the YAML itself.
H
They're
checking
that
the
if
it's
an
ops
file
it's
mentioned
in
the
readme
and
that
it
actually
has
test
coverage
to
make
sure
that
it
interpolates
against
the
base
manifest.
So
it's
a
fairly
low
bar
for
for
validating
prs.
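That interpolation check amounts to running the BOSH CLI against the base manifest with the ops file applied, and failing on a non-zero exit. A minimal sketch of such a check as a Concourse task follows; the image and file paths are placeholders, not the real test code.

```yaml
# Hypothetical task: fails if the ops file does not interpolate
# cleanly against the base cf-deployment manifest.
platform: linux
image_resource:
  type: registry-image
  source: {repository: some-registry/bosh-cli}   # any image with the bosh CLI
inputs:
- name: cf-deployment
run:
  path: bosh
  args:
  - interpolate
  - cf-deployment/cf-deployment.yml
  - -o
  - cf-deployment/operations/some-ops-file.yml
```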
H
If
we
had
concerns
about
a
particular
pr,
we
would
speed
up
a
bubble
environment
manually
and
deployed
to
make
sure
that
it
worked
that
way
before
we
watched
it.
So
those
are
pretty
much
those
that's
all.
The
changes
that
come
in
those
are
the
two
main
main
avenues.
We
we
moved
very
early
on
in
my
time
on
release
integration
to
a
model
where
we
nothing
was
committed
directly
to
develop.
H
Everything
was
validated
in
some
way,
shape
or
form.
First,
all
of
the
changes
that
we
made
as
a
team
went
through
the
pr
process.
So
that
way
we
felt
the
pain
that
that
contributors
felt
and
I
think
that
helped
us
to
to
improve
that
flow
once
it
hits
cf
deployment
develop.
It
then
goes
through
the
main
testing
pipeline,
which
is
is
this
again?
H
It
will
run
all
the
unit
tests
and
make
sure
that
everything
is
is
kind
of
sane
before
it
starts,
and
then
it
will
go
through
a
series
of
test
scenarios
that
we
have
that
long-lived
bubble
environments
there's
no
deployment
there.
Normally
they
get
cleaned
up
at
the
end.
But
if
anything
fails
in
the
middle,
then
they'll
stick
around,
but
we
have
a
scenario
for,
and
I
can
isolate
these
things,
so
we
have
a
fresh
install
that
is
just
during
a
clean
install.
It
also
includes
isolation
segments,
so
it's
got
isolated,
diego
cell
and
isolated
routing.
H
We
have
an
experimental
pipeline.
This
is
where
teams
could
request
that
an
experimental
ops
file
be
validated
by
us
in
preparation
for
being
promoted
either
to
be
merged
into
into
the
deployment
manifest
itself
all
being
promoted
to
a
production
grade.
Pops
file,
a
stable
apps
file,
as
you
might
say,
bosch
lite,
testing,
bosch,
backup
and
restore
testing,
and
then
windows
testing
with
the
cats
for
windows.
H
So
all
of
those
things
run
in
parallel,
like
I
said
they
each
claim
a
lock
as
a
specific
lock
for
their
environment,
because
there
are
nuances
to
the
way
that
bubble
is
set
up
for
them.
They
will
all
do
their
appropriate
deployment
and
then
run
their
appropriate
tests.
So
bbr's
running
dreads
flashlight.
We
gave
up
on
running
cats
a
long
time
ago.
It
was
just
too
unstable
upgrades
windows,
fresh
experimental.
H
There
are
any
cats
and
smoke
tests
and
then,
once
all
of
those
pass,
it
funnels
back
into
this
blessed
manifest
job,
and
that
is
where
the
change
then
gets
merged
from
develop
into
release
candidates,
and
then
that
will
trigger
the
job.
That
kind
of
starts
to
build
out.
The
release
notes
template.
So
this
is
where
we
get
the
table
of
all
the
release.
Changes
that
go
into
the
release
notes
and
the
other
thing
that
that
meant
triggers
is
the
the
deletion
of
all
these
environments.
H: So basically we would start by looking at the release notes, gathering all of that information, looking at the list of what has changed based on the previous release, and starting to decide: okay, is this a breaking change? Should this be a major release, or can this be a minor release?
H: Operators can have a lot of their own ops files, and so we want to be very careful. To us, if the operator has to get involved because the structure of the YAML that their ops file is modifying has changed to the point where it won't interpolate anymore, that's a breaking change, and that can be fairly restrictive.
H: If a property for a job gets removed and the operator had an ops file that was modifying it, that could also be considered a breaking change. So really, the catch-all is: anything that requires active manual intervention on the part of the operator becomes problematic; obviously that includes removing ops files that they may be using.
H: We do have a mechanism for promoting ops files, where we basically drop a symbolic link in experimental that points to the stable version, which avoids a breaking change; and then the next time we have a breaking change, we just clean all of those up again.
H: So if anything matches those criteria, we'd say: okay, this is a major release, and we'd run the ship-it-major job; if not, ship-it-minor. I don't think we've ever used patch. I mean, it has been run; let me actually click this thing... yeah, December 2017 was the last time it was run, so we certainly have never cut a patch release in my memory. But yeah, then we run one of those two jobs. GitHub has made it a little easier these days.
H: It's got the automatic release notes generate button, which I'll use to generate the PR section of it, and then paste in the... let me see, I can go find the last release.
H: Here, yeah. So this part we massage based on what GitHub gave us, and we paste in the template; if there's anything in particular that we want to call out, we can put notes about a particular release bump. And then this is more GitHub-generated material, which is nice. That is the release process.
A: So you make heavy use of bosh-bootloader to set up test environments. Where does this run? I mean, if there are several of these jobs running in parallel, you are really deploying a lot of different bosh-bootloader Cloud Foundry foundations, right?
H: I think this was set up because of the problems we had initially with getting sufficient GCP quota, but, if I drop this down, we have separate projects for each environment. So if I go into Trelawney, that's the upgrade environment, and I just go into VMs...
H: I think this is deployed at the moment, because CI... yeah, so you'll see that there's one Cloud Foundry deployment there. If I go into, I think it is BBR... so BBR is here. Yeah, so each one, you'll see, is completely separate. Some of them, I think, we scale down to one AZ, so there's only one VM per instance group, but some of them, especially if we're running CATS, kind of need that capacity to run CATS reasonably well.
H: The assumption being that BOSH and the underlying infrastructure will take care of the nuances of the different operating systems or the different cloud providers; that's what we've got there.
H: I think it's just the environments; again, this predates me. There's also the dev environment that I can spin up if necessary. If I go to infrastructure, we've got Luna, Trelawney, Hermione, Snitch; Bellatrix, for some reason, was our stable environment, and then the development environment was Maxine. So yes, it's just the environments, but yes, they go throughout.
H: But then there's the odd outlier, like the BBR environment, which is, if I can find it... for some reason, Baba Yaga. I don't understand that reference; I'll have to look at that.
H: So we have a job that tries to run every Tuesday morning; it's red because it couldn't acquire the lock, but this just runs bbl up, to make sure that the environment is up to date every week, and if we have really serious problems with it, we can completely destroy and recreate it. There's one of these for each of the environments in CI, and also one for CATS.
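That weekly refresh could be sketched as a time-triggered Concourse job roughly like the one below; the resource names and schedule details are illustrative, not the actual pipeline.

```yaml
# Hypothetical weekly bbl-up job; one would exist per environment.
resources:
- name: every-tuesday
  type: time
  source: {start: 6:00 AM, stop: 7:00 AM, days: [Tuesday]}

jobs:
- name: refresh-upgrade-env
  plan:
  - get: every-tuesday
    trigger: true
  - put: env-pool                 # claim this environment's specific lock
    params: {claim: upgrade-env}
  - task: bbl-up                  # re-converge the environment with the latest bbl
    file: ci/tasks/bbl-up.yml
  ensure:
    put: env-pool
    params: {release: env-pool}
```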
H: CATS is the environment that we use to test CATS itself, so any change that comes into CATS similarly will go through a testing process before landing on the release-candidate branch, and that has its own environment that is long-lived; it gets kept up to date every time we cut a new release of cf-deployment, and then every time a CATS change comes in, it just... let's see, where does it trigger? If that is the case... I'm just looking for it.
H: Now, there's a Concourse pool resource and there's a Concourse pool-trigger resource. The pool resource actually takes care of claiming and unclaiming lock files, and those are basically just represented by files inside these directories; the name of a file is obviously the name of the lock, and its contents can be metadata related to the environment.
H: So we have a separate repository for those. But then there is also the Concourse pool-trigger resource, and this thing is basically just responsible for watching the pool repository and triggering based on events that happen in it. So, for example... I don't think we use that too much here; actually, we might not use it here at all, that may be used in other pipelines I'm thinking of. So I think here it's just the pool resource that manages it.
H: It's triggered by the changes to develop, but the trigger resource is nice because you can have pipelines that actually manage a pool and say: okay, if you want a dynamic pool, then it can spin up environments and add files, and that can then trigger other jobs. So we do use that elsewhere.
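For reference, the pool described here is just a Git repository of lock files, declared to Concourse roughly as follows; the repository URL and pool name are placeholders.

```yaml
# Hypothetical declaration of the lock pool backing these jobs.
resource_types:
- name: pool
  type: registry-image
  source: {repository: concourse/pool-resource}

resources:
- name: env-pool
  type: pool
  source:
    uri: git@github.com:example/ci-pools.git   # placeholder locks repo
    branch: master
    pool: upgrade-envs        # directory containing claimed/ and unclaimed/
    private_key: ((pool-repo-private-key))
```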
H: There are... and we should be able to find these in here. So if I go into the ci directory of cf-deployment and I go to the pipelines, we've tried, for the major pipelines, to have a markdown file next to the YAML for each pipeline.
H: So this has a lot of the information I've been presenting today; it walks through exactly the same process, probably in a bit more detail, including bits I've forgotten, but it will go through all of that. It talks about the release management and pipeline management.
H: Oh, it looks like I never finished filling that out, but that's okay. So there are release processes for... I think the CATS one will be in the CATS repo, I'm sure. But then, yes, it talks about the environments and just the GitHub repositories in general that the team was responsible for. Oh, one thing I didn't touch on, and I think this will be something we'll want to start looking at sooner rather than later:
H: The other thing we had in the update-releases pipeline was an automatic stemcell-bumping job. This has not run since Xenial stopped, so I think one of the first orders of business is going to be to get this up to date with Bionic and get it bumping out releases based on those changes.
H: This had two flows: one if it was a minor stemcell bump, and one if it was a major stemcell bump, the difference being that major stemcell bumps, to my understanding, involve kernel changes, and so all of the compiled releases have to be recreated. The way we handle that is that we have a job at the beginning that detects what type of stemcell bump it is; so it looks and asks: is this a major, or is this a minor?
H: If I go back, I'm sure this will say that... yeah, this is a minor. So you can see, if the minor number changes, then the assumption was that the kernel is not changing, and so we don't need to recompile everything. What that allowed it to do was: the minor stemcell bump would literally just test everything again by deploying.
H: If all of that passed, it would make the change directly on main, trying to short-circuit everything, because we know: okay, it's a minor stemcell bump, it deploys, it runs smoke tests, everything's fine. We would go straight ahead, and this would cut a release at that point, and it would then also make the change on develop, so that it could flow through the main pipeline and everything would be in sync.
H: If it was a major stemcell bump, then, and I'll go back to a green run here, this is where it would now say: okay, I've got to deploy everything, and then export all the new compiled releases and upload all of those to GCS; and because this was such a big change, it went onto develop and would go through the main pipeline, to then be validated and released in a more controlled way.
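For context, the bump itself is a small manifest edit: cf-deployment pins its stemcell in a stanza like the one below, and the job rewrites the version (and, for an OS change, the os line). The version shown is made up.

```yaml
# Stemcell pin in cf-deployment.yml (illustrative version number).
stemcells:
- alias: default
  os: ubuntu-bionic
  version: "1.36"
```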
H: When I first joined the team, we had a pipeline that would automatically make the change on develop, but there was no validation, and that was causing us a lot of pain, because the main pipeline would go red: teams would push new versions of their release, and we had no differentiation between consuming a major, a minor, or a patch. So when a major came through and we needed to make manifest changes and things, it really slowed us down.
H: So all of this work was in service of keeping the main pipeline flowing, isolating any changes like that which needed attention into something that either we could look at, or the team could look at or certainly help us with. They could make a PR from that branch if it failed, and then we'd get the bump and the corresponding manifest changes that were required.
H: And then, similarly, the stemcell work: we again used to just have, I think, a job that would make the change on develop and then let it flow through. All of this was optimization around reducing the amount of work that we needed to do, but also getting the changes out and released as quickly as possible.
A: Okay. Now, one of our next big stories is to release the Jammy stemcell, or rather to integrate it into cf-deployment.
H: We have support for Jammy if the operator wants to bring it themselves; everything's in place in cf-deployment to allow that to work, but right now that would require a custom ops file.
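Such an operator-supplied ops file could look roughly like this; it's a sketch, not one of the files shipped in cf-deployment's operations directory.

```yaml
# Hypothetical ops file switching the deployment to a Jammy stemcell
# that the operator has uploaded to the director themselves.
- type: replace
  path: /stemcells/alias=default/os
  value: ubuntu-jammy
- type: replace
  path: /stemcells/alias=default/version
  value: latest
```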
E: Maybe one question from my side. In recent times, the release cadence of cf-deployment went down a little bit; we have cf-deployment 18 now. My question is: what needs to be done in order to get back to, let's say, a more predictable delivery, or something like that? And maybe we could do the next release somehow together, just to learn.
H: Yeah, definitely. I've started working with Carson and Tom, who's not here today, on the VMware side, bringing them up to speed, and that's part of what caused the slowdown: I was taking time coordinating with them, rather than just saying, oh okay, it's Friday morning, I can cut a release. I wanted to work with them and make sure they were in the loop and understood the process.
H: I'd love to go through that process with people here; we have to figure out a good time to do that. And then I think it's mainly just a question of deciding what cadence we want, what criteria we want for when things are ready to cut a release, and coordinating who's going to do it at that point. For, I guess, the previous six months it's just been me, and so I've just been saying: okay...
A: Yeah, well, ideally we would have a new cf-deployment every sprint; that would be every two weeks for us. That would be the perfect case, but I'm not sure if we can achieve that. Okay, so the immediate next steps for us would be to set up our own Concourse server, similar to the BOSH CI server where their pipelines are running. I already talked to Ruben and Chris about the account and so on, so that is not so much the problem, and when we have that, we can start moving the pipelines.
H: Then I can start thinking about what it will take to move the pipelines, what we're going to need in terms of resources, and how we can hopefully consolidate a lot of the sprawl that we had on Release Integration, with all of these different GCP projects, and try to bring that down into a more manageable set that we can use as a working group.
E: Regarding the stemcell: would it make sense to still get the Bionic stemcell process running anyway, just to have everything in nice working shape before switching over to Jammy, or to run Jammy in addition? I don't know; I'm asking whether we couldn't get that back to work.
B: I mean, certainly from Fidelity's point of view, having just dealt with the Polkit vulnerability, that's a fairly recent one, and having had to re-jig all our stemcells: having some fresh stemcells arriving right about now would be really, really good, and anything we can do to help get stemcells moving again, we'll be all over that.
E: I mean, from our point of view, we are also deploying our landscapes with the Bionic stemcell, and the stemcells themselves are flowing; it's just that they are not, let's say, integrated into cf-deployment, and, you could say, there's a little bit less trust in cf-deployment, because it looks like it never gets tested with that one. I think there are some jobs that are actually using the Bionic stemcell in the background somewhere; I think I've found some, but yeah.
H: Most things are in place. I don't know what the timeline is for Jammy, but it probably would be good to go through it with Bionic first, just to become more familiar with the process and the tooling that we have for managing this; then the next time through, with Jammy, (a) it should be a lot easier, but (b), you know, maybe that's something that other people can work on, and it doesn't have to be me working on it at that point. So it helps to spread the knowledge, which I think would be really good.
A: Meetings are scheduled only once a month; I really can't tell if this is enough for the beginning or not, but let's leave it at that. If you need more meetings, no problem. So I will try to compile a roadmap with all our tasks, and to do this I've already started to get familiar with GitHub Projects, this fancy kanban board, and the fancy upload script that Ruben provided.