From YouTube: App Runtime Deployments Working Group [June 23, 2022]
A
B
For all working groups that doesn't matter so much, but for the App Runtime Deployments working group now... it has.
C
C
B
Screen one... screen two. That's the wrong one. Yep.
B
Groups: one for the bots, there's only one, and then there's App Runtime Deployment, and now comes cf-deployment as the project area. There's only one in this working group, and here are all the committers who have write access for this repository. John has... a bot is maintainer everywhere, but you should not maintain something manually. That's... maybe I need to discuss whether we should keep that with maintainers or not, but as it was so in the past, we have kept it, because every change should be done by an automation. That means you added the...
A
C
A
B
Check whether these teams are okay. If you want to get rid of the other teams, this is still under discussion, I guess, in the next TOC meeting. I mean, of course, if you want to get rid of them, you could file a PR in the cloudfoundry community project, but that's pretty cumbersome stuff, because it's a huge file that is basically a dump of the whole organization in a YAML file, and there are hundreds of projects and hundreds of teams inside.
A
Okay, so I mean, the first team is still used for authorization to some Concourse instances. Yes, we cannot just automatically delete this.
A
And maybe for something else, so yeah, we should carefully check. Yeah, yeah, so...
B
B
A
A
Good, yeah, you can unshare if you want. Okay, then let me give a brief update on what happened last week. We had a very productive pairing session with Carson, and we managed to migrate the first pipeline from the VMware Concourse to our App Runtime Deployments Concourse, that is, the cf-smoke-tests pipeline.
A
So we transferred all required credentials to the CredHub of this Concourse instance and made sure that the pipeline itself is working. As you see, the tests themselves are still failing because of the ongoing Ginkgo version two migration.
D
That's correct. I made a PR last night that I hope will resolve that issue for smoke tests.
A
A
The back end, that is, the BOSH Bootloader resources for testing, are also not yet migrated. This will then also be the next big effort: to migrate those to our GCP account.
A
Okay, good. Then checking our...
A
D
It's small, yes. This would just be signaling that folks can start experimenting with Jammy, with an experimental ops file that will quickly and easily turn on Jammy for your system if you have already uploaded a Jammy stemcell. There's maybe something more to add to it. If there's a way to automatically upload that stemcell... I'm not sure if that's a pos... I don't think that's a possibility, though. It's also going to be horribly broken for most situations right now, because I think CAPI is still dealing with OpenSSL 3, and yeah.
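The ops file itself isn't shown in the meeting; as a rough sketch only, an experimental ops file that switches a deployment to Jammy might look like the following. The path and alias here are assumptions based on common cf-deployment manifest conventions, not the actual file from the repository:

```yaml
# Hypothetical sketch of an "experimental Jammy" ops file.
# Assumes the manifest declares a stemcell with alias "default",
# as cf-deployment's base manifest conventionally does.
- type: replace
  path: /stemcells/alias=default/os
  value: ubuntu-jammy
```

Applied with `bosh deploy -o <ops-file>`, this only changes which stemcell OS the deployment requests; as noted above, the matching Jammy stemcell still has to be uploaded to the director separately.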
A
D
A
Good, let me check the dash... the project board once more.
A
Yeah, so I'm trying to keep this overview thing up to date, but remember, these are manually uploaded using Ruben's tool, so I'm doing this from time to time to collect all the issues and pull requests in one place. But yeah, so this is not perfect, but okay.
A
Good. Then let's check... no, check the meeting notes from the last meeting. So yeah, this is done, and yeah, as soon as we are ready, we'll ask you for another pairing session for the... for the next pipelines, the Cloud Foundry Acceptance Tests. I think here nothing happened, right? So yeah.
A
B
Okay, so we will eventually... we will address this problem. There are some hard-coded v2 calls. The interesting thing is, it's calling an API that doesn't exist in v3, so we have to replace it somehow. It's something...
A
Yeah, yeah, I've had requests from other users who were wondering where this has gone, but I think it is... it is somewhere here. Yeah, so it's there.
D
Oh, I meant to ask why the new smoke test pipeline is paused. Is that just because it's failing right now?
A
Yes, because it's failing, and we were a bit unsure if it has really passed the other one, the old one. And yeah, well, the question is: do you already want to delete it? Would we lose anything valuable?
D
I am fine to delete the old one. We may want to give it a couple weeks, because we would lose the history of failures.
D
D
B
That means we could unpause the pipeline on the new Concourse now, right? Because we know the others...
A
A
D
Yeah, I haven't been that engaged with that PR as much recently, which is, mm-hmm... but that is blocking any successful CATS CI runs right now, so we do need to let the... oh, okay. So that's the smoke test one. Yeah, I made some commits last night that I was able to test locally and saw them succeed, so I think it's good to go.
D
I mean, if you minimize the vendor change, which is going to be the least readable stuff, and ignore go.sum, the rest of it should pretty much just be changing ginkgo to ginkgo/v2. There are some caveats, like the way that custom reporters have changed is pretty significant, so I had to...
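The bulk of that change, as described, is the import-path bump; as a minimal, hypothetical illustration (the actual files and import style in the test suite may differ), a typical test file's imports would change like this, since Ginkgo v2 lives at the `github.com/onsi/ginkgo/v2` module path:

```diff
-import . "github.com/onsi/ginkgo"
+import . "github.com/onsi/ginkgo/v2"
 import . "github.com/onsi/gomega"
```

The reporter caveat he mentions is that v2 dropped the old custom `Reporter` interface; reporting is instead done through nodes such as `ReportAfterSuite`, which can require restructuring any suite that relied on custom reporters.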
C
B
B
D
I don't think there's much urgency with migrating the foundations over. The easiest one to do is CATS, because that's the one that cf-smoke-tests and CF Acceptance Tests use. It might be worth prioritizing that, to see how hard it would be to move a foundation over, and then using that as a judge. But in general, I'd prefer, I think, the Jammy stuff over the foundation migration stuff.
B
Because when I look now at our SAP team, where we work on... when we say Jammy, it would mean that we focus on making the Cloud Controller work on Ruby 3.1, because that's a big blocker. And...
A
B
D
C
D
D
Oh, for automatically bumping the golang package on cf-smoke-tests-release: I made an issue about that 10 days ago, on GitHub.