From YouTube: App Runtime Deployments Working Group [February 9, 2023]
A: On the agenda today is the relocation, or migration, of our Concourse, so the old instance is going to be retired. I would tear down the old cluster tomorrow. Everything has already been paused and we have moved to the new approach provided by the App Runtime Interfaces colleagues.

A: App Runtime Interfaces infrastructure, so the Concourse: this is the new approach that we have adopted. There were quite a lot of pitfalls, but we contributed to the documentation so that, hopefully, the next project adopting this approach will have it a bit easier. Some Terraform plugins do not work on ARM; you have to compile them yourself, but nevertheless it worked this time.
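The ARM workaround just mentioned, compiling a Terraform provider yourself and placing it in the local plugin directory, could be sketched like this. The provider name, organization, and version below are made-up placeholders, not the actual plugins the speaker compiled; only the plugin directory layout follows Terraform's documented local-mirror convention.

```python
# Sketch of the manual build steps for a Terraform provider that has no
# published linux/arm64 binary. All names here are hypothetical examples.

def arm_build_steps(namespace: str, name: str, version: str) -> list[str]:
    """Return the shell commands one would run, in order."""
    # Terraform looks for locally installed providers under this path layout:
    plugin_dir = (f"~/.terraform.d/plugins/registry.terraform.io/"
                  f"{namespace}/{name}/{version}/linux_arm64")
    return [
        f"git clone https://github.com/{namespace}/terraform-provider-{name}.git",
        f"cd terraform-provider-{name} && git checkout v{version}",
        # cross-compile (or build natively on the ARM box) with Go:
        f"GOOS=linux GOARCH=arm64 go build -o terraform-provider-{name}_v{version}",
        f"mkdir -p {plugin_dir}",
        f"mv terraform-provider-{name}_v{version} {plugin_dir}/",
    ]

for step in arm_build_steps("example-org", "example", "1.2.3"):
    print(step)
```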
A: Deployments, so this is the project where the configuration is hosted. As you can see, we had to copy only a few of the files; the actions controller could actually be removed, we don't use that. So here is our configuration.
A: This is where we specify the project name, the zone, the domain, the GitHub team for authentication, and so on.

A: The Terraform stuff, the Terraform modules, are just referenced here via git with a specific tag that, of course, needs to be updated from time to time. So we have quite a good separation of configuration and code.
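Pinning Terraform modules to a git tag, as described here, might look like the fragment below; the repository URL, module path, and tags are invented for illustration, and the small helper only shows the "update the tag from time to time" step.

```python
import re

# Hypothetical example of a module pinned via a git tag; Terraform supports
# "git::" module sources with a "?ref=" query selecting a tag or commit.
MODULE_BLOCK = '''
module "concourse" {
  source = "git::https://github.com/example-org/infra-modules.git//concourse?ref=v1.4.0"
}
'''

def bump_ref(hcl: str, new_tag: str) -> str:
    """Rewrite the ?ref= pin to a newer tag (the periodic update chore)."""
    return re.sub(r"\?ref=v[\d.]+", f"?ref={new_tag}", hcl)

print(bump_ref(MODULE_BLOCK, "v1.5.0"))
```

After changing the ref, `terraform init -upgrade` would fetch the newly pinned version.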
A: So it says "migrate to updated APIs"; that's the problem we had. This is the new cluster and, yeah, we currently have 16 broken nodes. I had to scale up because.
A: There is this terrifyingly big job which recompiles all the releases. We had to execute it once to produce a valid version of the stemcell resource, and it consumes really a lot of resources, so I had to scale up the BOSH director to 16 CPUs and also scale up the nodes. These were all failing because of network timeouts and other resource constraints.
A: We actually have autoscaling on for the nodes, but this has absolutely no effect, because there is no trigger event that would cause any scaling up or down. Pod autoscaling is not activated, and typically, when you request more pods and there are no more nodes the pods can be placed on, Kubernetes would add more nodes. However, I tried pod autoscaling and it's also not a good idea, because if Kubernetes scales the pods down, you get orange Concourse jobs: it says "worker is missing" or something.
A: So this simply does not work. What also did not work so nicely was the automatic update of the.
A: We have to live with that, yeah. So I would scale down the nodes to maybe eight, as on our old instance, and only if there is a major stemcell update do we have to remember to scale up again so that this job passes.
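The manual resize routine described here (scale up for the big recompile job, scale back down afterwards) could be sketched with `gcloud container clusters resize`; the cluster and node-pool names below are assumptions, not the actual ones.

```python
# Sketch: the manual scale-up/scale-down the speaker describes, expressed as
# the gcloud commands one might run. Names are hypothetical placeholders.

def resize_cmd(cluster: str, pool: str, nodes: int) -> str:
    """Build a gcloud command that sets a GKE node pool to a fixed size."""
    return (f"gcloud container clusters resize {cluster} "
            f"--node-pool {pool} --num-nodes {nodes} --quiet")

print(resize_cmd("concourse", "workers", 16))  # before the big recompile job
print(resize_cmd("concourse", "workers", 8))   # back to the everyday size
```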
A: But okay, everything else is more or less green. The orange ones have just not been triggered yet.
A: The deployment pipeline is as green as it should be. I see, Carson, you have repaired a lot of the infrastructure environments; some of them were broken, and it wasn't exactly clear to me why. Today I've repaired the Windows infrastructure: there were two conflicting jumpbox IPs, so I tore everything down, set it up again from scratch, and it works again. So I hope this is all fine now. Good, and if there are no objections, I would remove the old Concourse tomorrow.
A: Any opinion, Sam?
A: Okay, good. Then, regarding the environments, there is still one credential left, the Concourse GCP service account JSON, which is used here to upload compiled releases into buckets, and it's pointing not to our GCP project but to the relint CI Concourse project.
A: Okay, so it would basically mean changing the service account and creating the buckets as needed. Just let me check: where is the cloud storage?
A: So these are the buckets we have. This is what you already did, Carson: this is a bucket for BOSH logs of failed jobs, so that we can retrieve those logs later for debugging. That should work; I think it's empty because nothing has failed at the moment. In the same manner we would need to migrate the buckets for the compiled releases.
C: The tricky part about the whole transfer process, usually, is that the bucket names are unique, so you have to either delete one or copy everything to a backup bucket, delete the old bucket, quickly recreate a new bucket, and then transfer everything to the new bucket.
C: Yeah, but that can be done. I'll just add it to my to-do list.
A: Every now and then, good. Now, what else did I want?
A: We are making good progress; it's slow, but it's progress. Okay then, let's go over last time's meeting minutes.
A
We
wanted
to
transfer
this
Docker
registry
to
check
for
the
Cloud
Foundry
acceptance
tests
that
check
access
to
a
private
registry.
I
haven't
done
it
yet
I
have
an
idea.
We
have
one
gcp
mail
left
which
we
could
use
to
create
a
free
Docker
account
where
you
can
host,
then
exactly
one
private
repository
for
free
or
something
haven't
done
it
yet
because
of
the
Concourse
migration,
but
could.
A: Okay.
A
Okay,
good,
lock,
Regatta,
release
stuff,
it's
not
strictly.
Our
topic.
I
have
not
heard
any
news
which.
A
Yeah,
so
no
new
releases,
okay
yeah.
We
discussed
some
approaches
how
to
best
work
around
the
missing
recent
block
sand
gloves.
So
it's
it's
not
really
our
concern
yet
so.
A: Yeah, well, okay, good. Build tech updates: we have three of them.
B: It's still a bit early and there are still three buildpacks missing, but we should make up our minds how we continue with cflinuxfs4: how to make it the default, or when to make it the default, in cf-deployment. Basically, I mean, there is the.
B: Standard deployment, and have just ops files that add cflinuxfs3, maybe remove it eventually. But I would not add an ops file for removing the new stack; I mean, if somebody needs that, they can invent their own ops file. This is pretty special.
A: Yeah, I think then we can make a somewhat hard switch and integrate this into cf-deployment. There is still an open pull request.
C
Sorry,
my
my
understanding
was
that
if
we
hard
switch
it
like
remove
CF
Linux
fs3,
then
that
would
cause
problems
for
anyone
running
a
a
deployment
where,
where
there's
like,
even
a
single
app
that
still
runs
CF,
Linux,
fs3
I,
guess
they
could
use
the
Ops
file.
To
add
it
back
in
is
what
we're
talking
about
yeah.
So
there's
there's
also,
there's
also
the
option
of
just
keeping
it
as
like
a
secondary
option
in
cfd
for
some
period
of
time
before
removing
it.
B: You could still decide to ship it anyway, but then you need to take care of OS maintenance, etc., so I would not keep it in the standard deployments. That should be right.
C: I guess it would be... I'm curious whether, after we get all the ops files in, we should start running some kind of pipeline with no cflinuxfs3, just to see if there's anything breaking there before we make that full transition over. Because, you know, certain ops files or errands, if they exist, may involve cflinuxfs3 apps that we don't know about.
B: Yeah, maybe everybody can think about it and we discuss it as a topic at the next meeting; I mean, it's not that urgent. We can also do it step by step with the next version. Let's say, once the buildpacks are out and there are both stacks, then two weeks later we switch the default, and two weeks later we remove cflinuxfs3 from the default shipment, but still have, of course, the ops files to add it again, so that we have a nice transition. Yeah, and then we have May.
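Before the final removal step in the transition above, one could check for apps still pinned to cflinuxfs3 via the Cloud Foundry V3 API (`cf curl /v3/apps`), which is one way to surface the unknown cflinuxfs3 apps mentioned earlier. The sample payload below is made up; the field names follow the V3 app resource, where the stack lives under `lifecycle.data.stack`.

```python
# Sketch: filter a /v3/apps response for apps still on a given stack.
# The payload is a fabricated example, not real deployment data.
sample = {
    "resources": [
        {"name": "legacy-app",
         "lifecycle": {"type": "buildpack", "data": {"stack": "cflinuxfs3"}}},
        {"name": "new-app",
         "lifecycle": {"type": "buildpack", "data": {"stack": "cflinuxfs4"}}},
    ]
}

def apps_on_stack(payload: dict, stack: str) -> list[str]:
    """Names of apps whose lifecycle is pinned to the given stack."""
    return [app["name"] for app in payload["resources"]
            if app["lifecycle"].get("data", {}).get("stack") == stack]

print(apps_on_stack(sample, "cflinuxfs3"))  # → ['legacy-app']
```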
A: I mean, we first change cflinuxfs4 to the default, and then we might already find something breaking here, could be; and then, when we remove cflinuxfs3, more likely we will find something that breaks. And yeah, then we need a kind of... yeah, we'll integrate it into the bionic validation; this one runs with cflinuxfs3, so with the ops file added.
B: I mean, I know it from the communication that we have to do at the moment, yeah, for our SAP customers; that's a whole open effort, and yeah, a little bit of communication doesn't harm. Good, so: lots of major version updates.
A: Just from the cf-deployment point of view, this should all be rather easy, compared to what we have here at SAP, forcing the customers to migrate from fs3 to fs4 somehow in time; and there is not too much time left.
C: I hijacked it and re-ran it, and got the same result, but I haven't tried to... there's no vim or anything on the container, so I can't edit the code on the container to print stuff out and see what's going on. The output is cut off because it's so large, so I can't see what the difference is between the expected and the actual.
A: Yeah, the complicated stuff is working; that was the important part. Don't be afraid of a few red jobs. Yeah, I messed around a lot with the pool-locking stuff to get the Windows jobs up and running again, but eventually it succeeded, yeah. So that's... this is all fine. Good.