From YouTube: App Runtime Deployments Working Group [May 11, 2023]
A: So, last time: the cflinuxfs4 compatibility release. It's all in preparation, but as of today there is still no release published on bosh.io. So, yeah, there is nothing to do for us; we simply have to wait until at least one release is available.
A: The Kubernetes update I have not yet looked into; that will come soon. Cutting the major CATS release after a major cf-deployment release: I've added a short hint to the release wiki, so the next time someone cuts an incompatible cf-deployment release, he or she is reminded to check the CATS release.
A: The Google Calendar looks correct now. Okay, I think there was no time zone set; that's why we had some trouble when summer time, Daylight Saving Time, started.
A: Okay, good. Then the agenda. Carson, you proposed to move the cf-deployment — no, the infrastructure pipeline, which sets up and tears down the long-lived bbl environments — from the cf-deployment project to the runtime-ci project.
B: Yeah, it's actually a move back, technically. When this working group was first started, it existed in runtime-ci, yeah, under CI. And Dave, I believe, told me that the logic there was that, because it included the CATS environment and other environments not related to cf-deployment, runtime-ci seemed a more logical place for it — it wasn't all cf-deployment-related. Which, at the time, I thought was fair enough. But I've already moved it.
B: So, whatever, we'll just leave it here. But with all the recent bbl stuff, it's been kind of annoying to make PRs to cf-deployment to fix the bbl environments. In my eyes, it would make more sense to just do runtime-ci PRs, because most of the time we're going to be doing the work in runtime-ci when we're fixing issues related to bbl or Concourse or whatever. So I would rather have the environment stuff — sorry, the infrastructure pipeline — closer to the environment stuff that it manages. That's the only thing I was thinking there. If that makes sense to you all, I think it's an easy copy.
A: Yeah. Then let's come to our most complex and hottest matter: the migration to Terraform 1.4.6, which came in as a surprise with the bosh-bootloader update. I mean, we have to be happy that there is some maintenance work happening on bosh-bootloader, and the Terraform version was really old — it was about time for an update. Actually, not that much has changed in Terraform itself; it's just that they turned a lot of warnings into errors. It means: if you have an override file, it must really override something, and if you have a count in one of your resources, you have to use an index to access those resources. So they're just getting a bit stricter, yeah.
We had to fix some, yeah, not-so-correctly-formulated resource definitions. But nevertheless — you see, I've tried to collect everything, all our issues, here.
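For illustration, here is a minimal sketch of the two strictness changes described above, using made-up resource names and the provider-less terraform_data resource; both used to be warnings and are hard errors in current Terraform:

```bash
# A resource with `count` has multiple instances, so it must be accessed
# by index; the old un-indexed reference is now an error
# ("Missing resource instance key").
cat > main.tf <<'EOF'
resource "terraform_data" "records" {
  count = 2
  input = "record-${count.index}"
}

output "first_record" {
  value = terraform_data.records[0].output   # the [0] is now mandatory
}
EOF

# An *_override.tf file must actually override an existing block; per the
# discussion above, a block that overrides nothing now fails instead of
# producing a warning.
cat > records_override.tf <<'EOF'
resource "terraform_data" "records" {
  input = "patched-record"   # fine: merges into the existing resource
}
EOF

terraform init -backend=false && terraform validate
```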
So the good news is that no update of the releases is needed.
A: So here we needed a clean bbl destroy of the four test environments, and then bbl up worked again. The only thing was — or were — this GCP DNS stuff; this uses the gcloud CLI.
A: ...the CI Dockerfile — it's the image built from this Dockerfile. So I've built it locally, uploaded it to an app-runtime-deployments repository, and changed this here in the manage-gcp-dns task definition.
A: So the infrastructure pipeline is currently using this branch here. But now, just today, they've published a new version of this image here to the Cloud Foundry bbl deployment. So I think we can just make a final move to this image here, and then we're fine. Yeah, makes sense.
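The stop-gap described here might look like the following sketch; the registry, image, and tag names are invented, and the exact Concourse task layout is an assumption:

```bash
# Build the CI image from the Dockerfile locally and push it to a
# temporary repository until the official image is published.
docker build -t ghcr.io/example/runtime-deployments-ci:gcloud-fix .
docker push ghcr.io/example/runtime-deployments-ci:gcloud-fix
# ...then point the task's image_resource at that repository/tag, and
# switch back once the upstream image is available again.
```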
A: Complicated stuff. The other environments have all been destroyed and set up again. With a little bit of fixing of our own Terraform files — like removing override suffixes where there was nothing to override — this worked. The bosh-lite...
B: I mean, the bosh-lite GCP plan-patch had an override file in it, and we were pulling that override file every time in our CI, which was the right move at the time. But now the override files actually need to override something; that override file, which declares all net-new stuff, can no longer be an override file, or else the Terraform fails.
B: So I've made a PR to rewrite the override file into a non-override file in bbl. And since I didn't want to copy the file into our thing — because eventually we will want to go back to just copying that plan-patch — I just set up a snowflake environment with my edited file, paused all the recreate jobs, and commented out some of the recreate stuff. So once a new bbl is released with that fix, we can uncomment and unpause the recreate stuff for bosh-lite.
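The shape of that fix, as a sketch with hypothetical file names — a plan-patch file that only declares net-new resources has to lose the override suffix so Terraform treats it as a regular configuration file:

```bash
# Rename the plan-patch file so it is no longer an override file.
git mv terraform/bosh-lite_override.tf terraform/bosh-lite.tf
```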
B: Until then, it's basically just going to maintain the same BOSH director throughout each cf-deployment run.
D: I think — yeah, the only concern I have there is: historically, we had problems with Google rate-limiting our bosh-lite VMs.
D: So recreating was a way to get around that, because we'd run a set of tests, then tear everything down and stand up a new one, and instead we'd have new quotas. Depending on how long this lasts, having a single long-lived VM might run into problems there. I don't know — we'll have to keep an eye on it.
B: Yeah, it's been failing intermittently, which might be related to that or might not; it's been sort of unclear to me. If you look at the blue bosh-lite deploy, it pretty much fails every other run. Which is weird — it's really strange. Yeah, see, it's almost every other run since I pushed the fix.
B: The weird thing that I've noticed is that the only difference between the two runs, as far as I can tell, is that once it succeeds, it tears down the cf-deployment and then does a bosh delete of everything — every release, stemcell, everything that was uploaded — and after that deletion occurs is when it fails, every time. But if we rerun it when it hasn't deleted everything, it succeeds. So it's a little confusing what's going on there.
B: But I think it's working — it seems to be working enough. At least every other run is fine, to the point that we can get commits through this pipeline for now, as long as it stays this way. It's just kind of annoying, and hopefully bbl releases that 9.0.1 pretty soon; the PR was merged, so as soon as they cut...
B: I haven't gone into the logs, so I haven't really tried that hard to figure it out; I've just been hitting retry.
B: So maybe we can wait on putting it back, and I can actually check the error messages first. The one other comment about the bosh-lite stuff: we do maintain a bbl config for snitch, and we only appear to do that because we want to change the VM type for it — the bosh-lite VM type, yeah — to e2-standard-16. And I was a little surprised by that, because I was like: oh, I thought...
B: ...you know, bosh-deployment — I thought we changed all the bbl-up stuff, all the bbl environment VM types, a while back; or I thought Stevenson did that. Turns out he only did that for the defaults, and bosh-lite has its own special case within bosh-deployment, so it's still pulling an old N2 VM.
B: So that's all — that was the only thing I wanted to call out. Otherwise, the snitch thing with just changing the override file seems to be working. Nice.
A: This was — yeah. So now, luckily, we have a new release: 9.0.1. This was also automatically pushed to our cf-deployment-concourse-tasks image, and it should now be used everywhere we use bbl. So here we now have all the fixes from the last weeks. Is this yours?
A: Yes, exactly — this is the plan-patch fix. Good. So the state is now: we have all our environments up and running, except for the one AWS environment.
A
So
I've
tried
to
Bubble
it
up
again
this
morning
with
a
development
version
of
Bosch
bootloader,
which
contains
the
same
as
901
and
still
we
we
don't
get
really
through
so
I
I
almost
but
by
hacking
around
widely
and
fixing
DNS
stuff
directly
in
the
consoles
I
manage
it
almost
I
almost
got
it
working,
but
The
Blob
store
credentials
were
missing
somehow
in
credit,
Hub
I,
don't
know.
A: So this is now the latest state: I've torn it down again and cleaned up everything. Luckily, bbl has a nice command, cleanup-leftovers, where you just specify your account credentials and the name of the environment, and then it really goes through the whole infrastructure provider and eliminates all orphaned resources. This is very nice.
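A sketch of that cleanup flow — the environment name and credentials are placeholders, and the exact flags may differ per bbl version (check bbl cleanup-leftovers --help):

```bash
# Credentials are passed the usual bbl way, via environment variables.
export BBL_IAAS=aws
export BBL_AWS_ACCESS_KEY_ID="..."
export BBL_AWS_SECRET_ACCESS_KEY="..."
export BBL_AWS_REGION=us-east-1

# Deletes orphaned IaaS resources whose names match the filter.
bbl cleanup-leftovers --filter my-aws-env
```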
A: So you can use bbl cleanup-leftovers if a correct bbl destroy is not possible, for whatever reason. That helped. Now, setting it up again, you get a few errors here, in this area where the AWS Route 53 resources are defined. So here's this stuff again.
A: Okay. And we also have one hosted zone here with, yeah, all the typical records for accessing the load balancer.
A: The challenge here is that our root hosted zone is managed in a GCP account, so you have to transfer the name servers over to the GCP account one time. I've added documentation to the experimental environment on how to do this.
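That one-time delegation step might look like this sketch; the zone IDs and domain names are made up:

```bash
# Read the name servers of the new Route 53 hosted zone in the AWS account...
aws route53 get-hosted-zone --id Z0123456789ABC \
  --query 'DelegationSet.NameServers'

# ...and register them as an NS record in the parent zone that lives in
# the GCP account, so the subdomain is delegated to Route 53.
gcloud dns record-sets create "aws-env.example.com." \
  --zone="parent-example-zone" --type="NS" --ttl=300 \
  --rrdatas="ns-1.awsdns-00.org.,ns-2.awsdns-01.co.uk."
```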
A: Well, yes — for once, this was obviously not tested. Yeah, okay. I tried to hack it until it almost worked, but still ran into something.
A: So we could... and, yeah.
A: Okay, good. Yeah, it seems like — yeah, I've renamed this several times over the last days. So you have to remove the override suffix. This...
A: ...uses a policy templating thing. It looks a bit ugly, but it works; it doesn't use any outdated resources, and for outputs it wants to have a sensitive here, for the key. So this should be fine, and I could create a pull request for bosh-bootloader so that at least this part gets fixed. This should then be the easy part.
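The sensitive-output requirement mentioned here, as a hypothetical sketch: an output that refers to a sensitive value must itself be marked sensitive, otherwise current Terraform fails with "Output refers to sensitive values":

```bash
cat > outputs.tf <<'EOF'
variable "access_key" {
  type      = string
  sensitive = true
  default   = "dummy"
}

output "blobstore_access_key" {
  value     = var.access_key
  sensitive = true   # omitting this line is now an error, not a warning
}
EOF
```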
A: ...to find problems, in particular because now — yeah, you see, this is the cf-deployment develop branch. It has now accumulated all these changes over the last weeks. So it would be nice if we got it running and validated at least one time on our AWS environment.
A: Yeah. So I tried to deploy the AWS...
A: ...environment with a locally built bbl. I got the DNS stuff running with a little bit of hacking. What was missing — and what I also couldn't really explain — is that some of the blobstore outputs were not present in the vars. And then what happened was this here: it tries to do a BOSH deploy, everything as usual, and then you get lots of these errors, such as, yeah...
A: It does not find certain keys, and also not the region, in CredHub. I don't know who is supposed to fill CredHub with those credentials.
B: That's a weird one, because that might not be a Terraform incompatibility. It might be bbl not putting the right Terraform files together, or putting them together in a way that somehow doesn't work, because bbl is the one that pipes all those files together into one thing. You could look at the output of the final Terraform files, maybe, and see whether those variables have been set, and trace back from there why they didn't appear.
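One way to do that check — the bbl state layout assumed here (a terraform/ directory for the templates, vars/terraform.tfstate for the state) may differ per bbl version:

```bash
# List which Terraform outputs actually exist, then compare them against
# the variables the CredHub-bound manifest vars expect.
cd "${BBL_STATE_DIR}/terraform"
terraform output -state="../vars/terraform.tfstate" -json | jq 'keys'
```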
A: I expected to find a vars file somewhere, as Terraform output, and that should serve as input for the bosh interpolate, right, with these vars. But I did not find this, so there's a small gap here. In the worst case, you could enter those manually. And yeah — weird. This is a bit weird.
B: You said that this was you building it manually, right? Are you confident that you brought in all the same plan-patches and all the bbl config stuff, in the same way as our pipeline does?
A: More or less, yeah. Everything I expected was in the AWS console — the buckets, the load balancers, everything was there — and this lookup was working for resolving the stuff. And yeah, the BOSH director was working, and then just at this point, for the final CF deployment, it said that several keys and things were missing in CredHub. This was...
A: Yeah, okay, good. So the next best thing is then, Dave: if you could try to analyze this stuff here in a little bit more detail and either open an issue or a pull request, so that we get the DNS stuff fixed. I will make a pull request for the S3 blobstore stuff, yeah, and then we have to try again to finalize the setup.
A: Okay, good. But then we are quite close to getting our pipeline green again. I mean, everything else is running fine; it's really just the experimental environment that blocks things.
B: One other weird thing about this process that I wanted to call out was the network-lb GCP plan-patch that we were applying in most of our environments — the one that I ended up just ripping out of everywhere.
B: The final conclusion I got from the people — after finding the people who actually wrote that plan-patch, tracking them down, and pinging them — was that they only made that plan-patch to test new GCP functionality that had been added, and it wasn't strictly necessary for any of the bbl environments to work. So I felt pretty okay with just ripping it out at that point. Their memory of it was: oh, some shiny new network load balancer had come in on GCP, and it wasn't available anywhere else...
B: ...and we wanted to try it out, because at the time, apparently, whatever open-source work they were doing involved pitching the best possible setup for cf-deployment — they wanted to give the best possible setup. I don't know how that translated from "here's this plan-patch to test this cool new feature" to "that plan-patch...
B: ...is now being used in all these cf-deployment environments." But I thought it was a good thing to remember that not everything we deploy in the pipeline is necessary to deploy in the pipeline. Some of these things are historical artifacts that don't actually need to be in there. So, I think...
A: Okay, I feel okay approaching it this way. Where was it? It had a good name.
B: Network... JSON? What was it? Network-something, yeah.
A
He
had
this
one
yeah,
so
yeah
yeah,
but
the
other
one.
So
this
plan
patch
was
applied
for
most
of
or
many
of
our
gcp
environments.
A: It wasn't really clear what exactly it was doing, and yeah, it turned out we don't need it anymore. It is an alternative load balancer setup, but the default load balancer setup is working fine for our GCP environments. So luckily we could just remove that and, yeah, get the GCP environments working again.
A: ...get it running before the next working group meeting, yeah. Okay — otherwise we really have to skip it.
C: You know, I guess once the pipeline is working again and we have cut a release, we would then probably continue with removing cflinuxfs3. The PR is open, but...
A: Open — Florian is working on that. I added a few comments; he has not updated the pull request yet. Okay. But of course this is also quite a big change — a little bit bigger than anticipated: removing everything that has to do with cflinuxfs3 and moving it to an ops file.
A: This would be the new ops file if anyone still wants to use, or has to use, cflinuxfs3. Yeah, of course, for now it doesn't make sense to bring this in; we'll do this after the next regular release.
B: One other thing, not related to this. I don't know if folks have noticed yet, but the security folks from the CFF reached out and asked me to update the release notes on cf-deployment to denote a routing-release CVE that hasn't been released yet. I don't have a ton of details, nor am I allowed to talk about it, I guess...
B: ...but I just wanted to call out that I did update the release notes to specify that most of the latest versions are affected by a CVE.
A: In version 262 there is a problem that applications are not registered as healthy anymore, or something, and then the route is unregistered. So we have to update to 266. 266, right — but this should already...
A
It's
in
develop
already:
okay,
so
we'd
as
soon
as
we
can
cut
a
release.
This
is
in
and
yeah,
and
this
should
be
fine.