From YouTube: App Runtime Deployments Working Group [April 13, 2023]
A: Yeah, so the migration of the bot accounts is completed, so we now have this little guy here, the ardwg gitbot, a completely new GitHub account. It just has one SSH key and one token: the token for releasing, the SSH key for git push/pull, and that's all. The credentials are stored in our password store.
So yeah, we still do not have a good solution for sharing credentials with the community, and it is tied to the app deployments Cloud Foundry org mail. This is important because you sometimes get a verification code sent to that mail if you try to log in. Okay, but I think everything works, so you can see that releases can be created and git operations work.
A: So yeah, here, for example, this one. I think every git operation should now be done by exactly this one gitbot, and I notified Dave and Carlson that they can delete the three other gitbots. Branch protection is also working as expected, except for this single cleanup-stale-branches job, which tries to delete something that accidentally matches the regex.
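To illustrate what "accidentally matches the regex" can mean for a stale-branch cleanup job, here is a minimal Go sketch. The pattern and branch names are assumptions for illustration only and are not taken from the actual cleanup-stale-branches job.

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical illustration of an over-matching stale-branch regex:
// the pattern is meant to catch temporary bump branches, but it also
// catches a long-lived work branch with a similar prefix.
func main() {
	// Intended to match temporary branches like "bump-golang-1.20".
	stale := regexp.MustCompile(`^bump-.*`)

	branches := []string{
		"bump-golang-1.20",   // stale branch, intended match
		"bump-fs4-migration", // long-lived branch, accidental match
		"main",               // never matches
	}

	for _, b := range branches {
		if stale.MatchString(b) {
			fmt.Printf("would delete %s\n", b)
		} else {
			fmt.Printf("keeping %s\n", b)
		}
	}
}
```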
A: All done. The deploy keys have been deleted, if I didn't miss one. The only thing that's left to do is to use a distribution list as the mail address, once we have that.
A: Okay, update: Samsung Miner is green again. This is all fine.
A: Before we come to the hottest topic... but yeah, I guess in this small round it doesn't make sense to have the other, bigger discussion. So I started with the story to make fs4 the default stack, and the pull request is ready for review.
A: Where is the Zoom chat? Yeah, okay, I can also paste it in the Slack channel. So yeah, as planned, this pull request integrates all the fs4 stuff into the cf-deployment YAML and makes fs4 the default stack.
A: The old files, the experimental ops files, are still there but empty, so this should make the transition a little easier. But of course we will remove those in one of the next releases, and we have an ops file for making fs3 the default again.
A: So Aftab is currently testing this locally.
A: With the Cloud Foundry acceptance tests. It would be nice if we had the acceptance tests running automatically here, but yeah.
B: For the next sprint, that means, in May, removing the cflinuxfs3 stack from the standard cf-deployment and, yeah, just having an ops file to re-add it again?
A: Okay, good. Yeah, then there is the topic we have already discussed several times: the RFC for managing the incompatible cflinuxfs4 changes.
A: Yeah, it probably needs a little bit more input from our side. I added a quick comment on how we could solve it, or how we would prefer things to be handled from the cf-deployment point of view.
A: So one main idea is to have one BOSH release with different versioning schemes, 0.x.y being compatible with the Ruby and Python runtimes, and... no, sorry, yeah, so different versioning schemes, one BOSH release. This would be difficult for cf-deployment because we could not easily manage two different versions of the same release in one deployment.
A: But okay, this should be detailed out a bit better, and there are a few other ideas for how to handle this technically.
A: Are we...? I'm pretty sure we can remove it. Collaborate...
A: Yeah, sorry, I opened an anonymous tab.
A: So let me check GitHub.
A: So, okay, yeah, indeed. This group now only contains two members, that's me and the bot. This is fine and automatically configured, and indeed, yes, this was wrong. I think this can now simply be removed.
A: So now everything should be clean: the deploy keys are deleted and, yeah, the groups are, too.
A: So this fantastic epic should then almost be finished, and we would end it. Yeah, good, any other topics? If not, maybe one more thing, which is, yeah, again: the CATs tests are very unstable, and I already asked here in the related thread.
A: So this is the pattern we use for pushing test applications: we do a cf push and then wait for the CF push timeout duration in seconds.
B: This is still... I mean, I think this timeout is required because it covers everything from entering cf push until cf push has finished. The 60 seconds are just one small step within the cf push.
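A minimal Go sketch of the push-and-wait pattern discussed here, assuming a cf CLI on the PATH. The function name, the 120-second value, and the env-var-style comment are illustrative assumptions, not the actual CATs helper code; the point is the distinction between the overall push timeout and the roughly 60-second health-check window that is only one step inside it.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// pushWithTimeout runs `cf push` and gives up after an overall timeout.
func pushWithTimeout(appName string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// The overall timeout covers everything from entering `cf push`
	// (upload, staging, starting) until the command returns.
	cmd := exec.CommandContext(ctx, "cf", "push", appName)
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("cf push %s exceeded %s:\n%s", appName, timeout, out)
	}
	return err
}

func main() {
	// e.g. CF_PUSH_TIMEOUT_DURATION_SECONDS=120 in the test config (assumed name);
	// the ~60s app health-check window is only one small step inside this.
	if err := pushWithTimeout("test-app", 120*time.Second); err != nil {
		fmt.Println(err)
	}
}
```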
B: That's independent. I mean, the health check is just a thing for when the application container comes up, so this is completely independent of the API nodes. Okay.
B: That should be excluded; it should really be the container start. So let's say you have just very, very little memory, and you have VM types that are already exhausted on CPU cycles, and maybe your container just gets too few CPU cycles to start up. For a Java application I would say yes, that could happen, because it has a lot to do, but an nginx static-file app? Actually, I mean, it should take milliseconds to start such a container. No.
B: Increase that... there was a discussion in the CAPI channel, because the CAPI team also observes such flakiness, and that was for one application that uses the binary buildpack, an HTTP application implemented with netcat and Bash.
B: Okay, Ricky, bad quote, but anyway, this one also, for an unknown reason, doesn't come up and is often problematic, and nobody knows exactly why. There are ideas to look into the buildpack lifecycle, or whether something was changed in routing, but there's no real root cause yet. Okay, maybe just double check whether it's the same application that fails; I couldn't find out if it's...
A: Okay, good. But I could check if the Diego cells are maybe a little bit overloaded, and whether things get better if we add a few cells or scale up a little bit.
B: And maybe there's this in the CAPI channel? Oh, I could look it up. There's a discussion about that problem, and Carson knows that in detail as well. Yeah, I wanted to ask him tonight about it. Currently they are asking all the other teams whether they remember; it was a certain day when it started, the 3rd of April or something like that. Maybe we can double check if that coincides.
A: Not really, I think; at least the experimental tests have always been a bit flaky, but you can also double check. These are the only ones that run on AWS; the rest run on GCP. Maybe they have the wrong VM size or something. I think the upgrade is also not really stable, and rash is also flaky.