From YouTube: App Runtime Deployments Working Group [October 13, 2022]
A: Okay then, let's get started. A short disclaimer: my shipment is only four stations away from me, so I may have to run to the door. Let's check headphones and microphone. Okay, we'll give you feedback, Carson; he has no headphones and a new microphone. So, migration of the pipelines is more or less complete, and all pipelines are happily running.
A: We do have occasional problems because some of the test environments are located in the US and Concourse is in Europe, which can lead to connection timeouts; sometimes, not always. But apart from that it's running fine. So now the question: are we ready to delete the old pipelines from the release-integration Concourse?
C: I think we're feeling pretty good about it, right? The only reason to keep it around, if everything is running smoothly, is history. As long as a pipeline has run enough times and accumulated enough history, I think we're probably good to kill it.
A: Okay, good; then we keep them a little longer for history. That's fine! It's just that our backlog item said the old pipelines should have been deleted.
D: Yeah, we can also agree to leave them until the end of the year or something like that. I mean, this doesn't matter that much. Okay.
A: Yeah. So, is there still anything that could prevent us from merging this into cf-deployment-concourse-tasks?
C: One thing: Dave and I had previously said we were going to hold off until we send a message to the CFT mailing list, just because this changes the default for anyone who is using the container image, right? Not a lot of people use the container image, and not every project has moved to Ginkgo V2. But yeah, Dave and I just never got around to sending that email; that was the only thing that was being waited for.
C: If we want to say it's not worth sending an email, I'm okay with that too; otherwise I can try to prioritize sending that email today.
A: ...had its last release two years ago, and I wondered if this release process is still used, or if you just use the latest version and don't release anything anymore.
A: ...the version from the main branch is just used by the other pipelines then, so...
A: Yeah, I mean there is anyway not much ongoing activity here, so... well, okay, good. Updating Go with go mod: this would be something that is merged directly, but directly to the main branch. So, okay, there is no urgent need for a release process. Then we can just work with PRs against the main branch, and that's it: use the latest version on main, if that's good enough.
A: In the past, yes; I checked the diff and it was quite large, I think.
A: Okay, good. And this is a follow-up item: merge develop into main.
C: Sure; you might want to add an action item to delete the develop branch as well. Leaving it around would be confusing.
A: Okay, good. Then I noticed that someone (who is this, Maria Sheldon?) created a new dotnet-core buildpack test with quite an impressive amount of resources: several megabytes of DLLs. That caused this particular CATS acceptance test to fail, because package uploading took too long and we got a gateway timeout from the ELB, an HTTP 504. After an hour or so of analyzing timeouts with one of our colleagues, who is very good with TCP and HTTP keep-alive and all that stuff, we increased the idle timeout of the load balancer, and things are running again.
A: You approved this, and I just wanted to ask: is everyone aware of this problem, and can this test application somehow be reduced to a more manageable size?
A: So I had to increase the ELB idle timeout. I don't know; I'm pretty sure someone else with a slow network connection will also complain.
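(For reference, a minimal sketch of the kind of fix described above: raising a classic ELB's idle timeout beyond the 60-second default via the AWS CLI. The load balancer name and the 300-second value are placeholder assumptions, not the values actually used.)

```yaml
# attrs.json: request body for the AWS CLI call below (JSON is valid YAML).
# Hypothetical invocation; the ELB name is a placeholder:
#   aws elb modify-load-balancer-attributes \
#     --load-balancer-name cf-router-elb \
#     --load-balancer-attributes file://attrs.json
{"ConnectionSettings": {"IdleTimeout": 300}}
```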
C: Out of curiosity: did this only appear on AWS, or also on GCP?
A: Yeah, the Hermione environment is the only AWS environment here; that's where this test runs. I don't know why it runs only there, it must have some other reason. But luckily we had seen something similar with our performance test, so it could be fixed easily.
C: I guess it's just surprising to me that this test only runs on AWS, because I thought it was part of the detect suite, which should be pretty common.
A: I think I checked the other CATS executions, and I did not find a fresh one; no fresh CATS or upgrade-CATS, take a green one, and dotnet is just not there.
C: If this was where the gap was, it's another example of AWS messing us up recently. It's probably still worth raising the issue, because if it's an issue with AWS, this is probably still hard on GCP even if it's not breaking. Maybe we can find ways to reduce the file size, yeah.
A: The AWS default idle timeout was 60 seconds, which was not enough. Good, okay, fine. So then I will raise an issue on that, and I hope we can reduce this somehow, because it really is tons of DLLs; I don't know if that is really necessary for...
C: Okay, sure. I mean, as Dave has said in the past, we've historically not tracked old stemcells once we've updated. But if you want to do this, there's no real obstacle; it's easy enough to do.
A: Yeah, we want to have a basic validation so that we can detect Bionic problems.
D: ...to see them, because at the moment we are still on Bionic. Okay, that won't last long: as soon as the Jammy release is there, I think four weeks later we are on Jammy, unless there's a big issue on the big landscape. But there are...
D: ...are some activities here and there around stemcell hardening, compliance, etc., and all of this still happens on Bionic. So just understanding whether there are problems with Bionic is helpful. Let's say this doesn't mean that we have to fix everything or fall into panic, but we should be aware of it, I think. Cool.
D: Yeah; or let's say in April or May next year, if there are no Bionic stemcell updates anymore, then of course this doesn't make sense anymore, and those other activities will need to come up with a plan then. Cool.
C: One thing with moving this pipeline over that I just noticed: we're going to have to either repurpose one of the existing open-source environments that we have, or make a new one for this pipeline, as a sort of sub-step. Previously we were able to use our internal Toolsmiths pools, since it was a VMware Concourse; but if we shift it over to the open-source one, we can't do that.
D: ...already using the region; maybe we should talk about which regions we use, then.
D: ...a little bit longer, but it's just to get this rolling as well.
C: Sure. I think we've already done one environment move over to the community, with Cedric I think, after Sven, Johann and I collaborated on that. So hopefully it's straightforward to do the next one. But even for a temporary environment, it would probably be nice to add it to the infrastructure CI so that we have an easy, consistent way to roll it up and roll it down if we need to.
C: Exactly; and we could even use, like, stable. Bellatrix and dev aren't currently in use; if we just update these credentials to actually work and have this running, it could take the place of a temporary environment.
A: Yeah, what is this... yeah, right? Exactly. So we would rename this to bionic-stemcell, make the necessary adaptations, upload it to ours, pause the old one on yours, and make sure it works with the Bellatrix environment. And then we have our basic Bionic validation. Okay, good! So it's a bit of work, but sounds like a plan.
D: Maybe just a question into the round: let's assume tomorrow we have a cf-deployment version 22 with Jammy as the stemcell, and we announce it, and one or two weeks later somebody comes up with a major problem. Let's assume Cloud Controller doesn't work in large environments because memory consumption is higher. That's not so unlikely; it may happen. I heard some rumors here, because the BOSH team detected such issues. What do we do then? Do we roll it back?
D: Do we provide updates on the cf-deployment version 21 line? Do we say bad luck, we now work on, or focus on, fixing those issues, and...
C: I can say from a historical context how it was handled in the past; I don't know if you're going to like it, though. cf-deployment has already been labeled as not supposed to be supported for production environments, right?
C: So historically we've done significant testing on stemcells before we release, to feel confident that they are working the way they should and can handle the load of a Cloud Foundry. But once we make that release, if it's not working for you, we're going to try and fix it; in the meantime, you are either stuck on that version or you're rolling back, right?
D: ...that correctly, yeah. What we would most likely do is still take this version 22, but apply the ops file that switches us back to Bionic, which in this case would probably solve the issue, and then we would have to go from there and see how to continue. So I'm not that pessimistic, but it could happen, and I just wanted to bring this to the table.
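(A minimal sketch of what such a rollback ops file could look like, assuming cf-deployment's usual default stemcell alias; illustrative only, not necessarily the actual file.)

```yaml
# Hedged sketch: a go-patch ops file pinning the stemcell back to Ubuntu Bionic.
- type: replace
  path: /stemcells/alias=default/os
  value: ubuntu-bionic
```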
D: I don't know when the first, let's say, hard dependency on Jammy will come. I don't know what happens when cflinuxfs4 comes out; in theory it should also work on Bionic, but in practice we will see.
A: So someone from the working group can bring the next cake, and of course the ones at the conference can celebrate there.
C: I have one more thing. But first: the cake looked pretty sweet; I'm jealous that I didn't get...
C: ...to try some of that. While we were scrolling over the release-integration CI, I saw one pipeline that I'd sort of forgotten about, if you want to navigate back there: find-timeout-scales.
C: This one is interesting. We may want to not delete it for a little while; but it hasn't been a problem recently, so we might be good.
C: The history of this one: when David Stevenson changed all of the, what was it, the VM sizes in bosh-deployment, and that got moved into bbl, it broke most CATS runs, because the base-level bbl deployment was too underpowered to successfully run CATS with the default configurations.
C: The immediate solution to that was an ops file, which I think is in cf-deployment's operations/test, that increases, I think, the number of VMs and the size of certain VMs, to make the bbl-based cf-deployment pass CATS. And I think we've applied that in all of our environments.
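(As a rough illustration of the kind of scale-up ops file being described, in BOSH go-patch format; the instance group names and values are assumptions, not the contents of the actual file in cf-deployment's operations directory.)

```yaml
# Hedged sketch: add diego-cell instances and use a bigger API VM.
- type: replace
  path: /instance_groups/name=diego-cell/instances
  value: 3
- type: replace
  path: /instance_groups/name=api/vm_type
  value: large
```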
C: So this stopped being an issue; but theoretically that was supposed to be a stopgap, and we would go and look at CATS and try to make it pass on the default cf-deployment. The way we were going to solve that: if we change the timeout scale of CATS, which is an input to the integration config, it would actually pass even with the weaker configuration, because the slow thing that was causing CATS to fail was, for the most part, that apps weren't deploying quickly enough.
C: So if you increase the timeout scale, the time to wait after a cf push or certain other CF actions, it would be fine; a little slower, but fine. So Dave and I set up this pipeline to try to discover the optimal timeout scale to set, along with the number of nodes we could run each timeout scale with, right? Because we've been recommending 12 for a full cf-deployment environment for some time now, but no one has actually validated in a while that 12 is the optimal number.
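(For context, the timeout scale is a field in the CATS integration config, a JSON file; JSON is valid YAML, so it is shown as-is below. The field names follow cf-acceptance-tests' config, but all values are illustrative placeholders, and the node count is normally passed to ginkgo separately rather than set here.)

```yaml
# Hedged sketch of a CATS integration_config.json with a scaled timeout.
{
  "api": "api.example.com",
  "apps_domain": "example.com",
  "admin_user": "admin",
  "admin_password": "((cf_admin_password))",
  "skip_ssl_validation": true,
  "timeout_scale": 1.1
}
```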
C: So we've been running it with... if you were to zoom out on this pane, this represents a timeout scale of 1.1x what it was set to previously, and it runs with increasing node values, one through 15. So, theoretically, with this whole graph...
C: If you were to select multiple scales, 1.1 and 1.2, with shift, you'd see the full layout of how CATS passes with different timeout scales and different node values, where run-cats 1.1 9 means a timeout scale of 1.1x what it was previously and nine nodes. And that would get us to a recommendation that we could then backtrack with: go into CATS and actually just change each default.
C: ...each timeout value that the timeout scale would apply to, and just increase it by whatever optimal number we arrived upon. And then we could also change the recommendation for nodes, if that changed. We've sort of forgotten about it, because everything has just been working; but if we want everyone to take out that ops file, which I think is being applied to all of our cf-deployment environments to increase their size from the base, we will probably want to go back and have a look at this.
C: I think it's in... what file is this, or what folder is this? This...
C: That was how we fixed the problem; either that, or we had manually increased the timeout scale to two on several of the CATS jobs themselves in the cf-deployment pipeline.
C: One caveat to this graph: it's probably not relevant anymore, because it claims Toolsmiths environments, which all apply that same ops file. So it's not actually testing what we want it to be testing, and I'm not confident when that was introduced, so I'm not sure how many of the runs actually ran with the older environments.
C: Okay, one interesting finding was that it was kind of consistently passing past 12 nodes, and being faster than with 12 nodes, which was pretty cool. Now there's some red in it, but previously some of the scale values were consistently passing and speeding up some of the cf-deployment CATS times by a decent amount.
A: Okay, so you determined that somehow 9 or 10 is the maximum number of nodes for successful execution. Well, okay, so this is for finding the maximum node and timeout combination for success. Sorry.
C: Yeah, no specific action item on this; I just wanted to call it out in case this graph gets deleted. Maybe we want to save it for a little bit.
C: Exactly, yeah; and I guess your hand just dipped, but yeah. The idea of running it in this graph style was that we could click on each run to see how long it ran, so we could get an idea of where the slowdown from parallel execution was happening; and I think we saw that it was happening around 14. If you click into some of these 13s, they were running... I think if you click into, like, 1.5 / 13...
C: I guess experimental takes closer to 40; I don't know how many test suites we're running.
D: I mean, in general I think it's a nice thing that we run our tests with the standard configuration that a, let's say, naive cf-deployment user will also see. That would make some more sense, yeah.
C: Yeah, they did at one point work, but each run... they've since...
C: ...gone down, yeah. Okay; but the integration config carries over, it's just the username, password and API that are updated each run.
C: Anyway, there's a big deviation that may not make a lot of sense; but the point is, we're doing a little bit of magic to make CATS work on our default environments, and the magic is either an ops file or increased timeout scales. And if we want to get back to running CATS in the default environment, we probably want to either backport the timeout scale to CATS proper as the default, or bump up the size of our VMs in cf-deployment by default.
C: Yeah, I think the timeout scale should be like the max timeout. I haven't validated this, but hopefully increasing the timeout scale by two, for example, would have the same effect as increasing it by 1.5, because it just increases the maximum time we'll wait for something to complete. But I haven't validated that; and again, if it slows everything down too much, because it's just taking forever for the full plan to run, then we may want to...
A: ...we can use it for fine-tuning. I mean, it's not our most urgent problem. First, I think we should try to move all the test environments into the same region where Concourse runs.
B: Yes; yeah, maybe one quick topic we could talk about, I sent it to you earlier: I changed the one with the UAA stuff in cf-deployment.
B: What I changed is basically the UAA scopes, to be a YAML list instead of a string. The first one is basically needed for the Cloud Controller API rate limiting; so the scopes are no longer a string, they're a list, and with a list they can be easily manipulated in an ops file, because if it's a string, then it's a bit cumbersome to make any modifications. And I've just scrolled down...
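(To illustrate why the list form is easier to work with: appending one more scope becomes a single go-patch operation, whereas a comma-separated string has to be replaced wholesale. The property path and scope value below are assumptions for illustration, not the actual paths in cf-deployment.)

```yaml
# Hedged sketch: with a YAML list, an ops file can append a single scope.
- type: replace
  path: /instance_groups/name=uaa/jobs/name=uaa/properties/uaa/clients/some-client/scopes/-
  value: routing.router_groups.read
```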
B: And if you go back to the comments in the PR, there's one problem I've just encountered in the end. The problem is that in the UAA YAML definition, it prefers scopes over scope. So if someone has an ops file changing the scope variable, and we now switch over to scopes, then basically scopes would win over the old scope field. So it's a bit of a breaking change, I would say. I mean, it's good...
B: If we release Jammy now, we could just ship it with that as well. I guess it's fine; I hope not too many people start doing that, but I guess we should mention it in the release notes. But I don't know; the question is basically how such things were handled in the past, and how should such things be released?
D: I think in the past, if there was an incompatible change in the, let's say, manifest configuration, then it required a new major version, which we are going to cut anyway, and it was announced in the release notes, I think.
B: Yeah, I think it's not a problem in cf-deployment itself, because I think there are only ops files which add new clients; then it doesn't matter, you can use either way. But, for example, at SAP we have ops files which make some modifications.
A: Yeah; anyway, for every commit that is pushed onto the develop branch there is a new pipeline run, and you have to wait until all these jobs are green; then you arrive at a release manifest, and then you're ready for the next release.
A: So I think the Jammy stuff is now through, if I'm not wrong, and then we cut a release from that. And your PR, when we merge it tomorrow, will anyway take a day or so until it is through the pipeline. Yeah.
A: Okay, good. But yes, of course, we will have to explain in detail in the release notes what the breaking changes are and what people have to do.
A: Okay, so where's the link now again?