From YouTube: App Runtime Deployments Working Group [July 28, 2022]
A
Yeah, I could go through the agenda. Johan is on vacation, so he won't join. That makes it quite a bit challenging at the moment to... yes.
A
Some topics, but we will see. Okay, maybe the first topic, just to mention it shortly: there's still this discussion underway about the bbl ("bubble") project, whether that should be moved to App Runtime.
A
Okay, so we just wait until there's an outcome of this discussion, and that's all fine. I mean, it's not urgent, right? Yeah.
B
No, we're in a good place. At the moment we have Jammy support, so yes, I think it's just a question of how we want to support it long term.
A
PRs to them... that's maybe not the most urgent use case for our group here, because nobody is applying, but it could also be used for granting read access to pipelines, etc., if we don't want to make them public. That's something.
A
Coming back, right: there's also a concept under discussion of having a bots area per working group. Since we have only one or two areas it's not that important for us, but there are other working groups who have really separate areas, and... right.
A
Yeah, yeah. I'm pretty sure everybody... yeah, we wait, and if somebody complains, of course we can revert or add the teams.
A
That's currently not in the plan for GitHub for the Cloud Foundry org, but Chris Clark wanted to reach out to Microsoft, and if we get that, we could grant all the approvers the maintain role, which would allow them to configure CI, etc. I mean, from a TOC point of view, we only want to prevent repos from getting deleted.
A
Other rights, yes, but at the moment this concept is a bit strange: either you are admin or you just have write access. But let's see what comes out. If that does not happen, then we will probably introduce something like a group admin or so; that's still in discussion. It doesn't solve the deletion problem, but at least it's limited to one right.
C
That maintain question reminds me that it was originally raised because the bots in ARD that are used in CI to push to GitHub cannot pass the branch protection rules without admin access. Have we checked on the bot accounts that we use in our CI? Because I've just realized I've not done that.
A
They are in the working group and they would not get admin rights at the moment, exactly. Maybe they still have them because of these non-standard teams, but that could become a problem; we will see. Yeah, we discussed it. There might also be another possibility: to bypass it by special configuration in the branch protection, maybe not just for admins, but to exclude those users. We also need to look that up.
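For reference, the "exclude those users" idea corresponds to the restrictions allow-list in GitHub's branch protection REST API, which names who may push to a protected branch without needing admin. A minimal sketch of the API shape, assuming a token in GITHUB_TOKEN; the org, repo, branch, and bot names are hypothetical placeholders, not the group's agreed configuration:

```go
package main

// Sketch: let a CI bot push to a protected branch without granting it admin,
// by naming it in the branch protection "restrictions" allow-list.
// example-org, example-repo, and example-ci-bot are hypothetical.

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	body := []byte(`{
	  "required_status_checks": null,
	  "enforce_admins": true,
	  "required_pull_request_reviews": null,
	  "restrictions": {"users": ["example-ci-bot"], "teams": [], "apps": []}
	}`)

	req, err := http.NewRequest(http.MethodPut,
		"https://api.github.com/repos/example-org/example-repo/branches/main/protection",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GITHUB_TOKEN"))
	req.Header.Set("Accept", "application/vnd.github+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```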
C
Yeah, yes. Also, we would lose our admin access, right? So it'd be all on Johan, who's on vacation, to bypass the branch protection rules, if there are any. Right, okay.
A
I have admin rights because I'm a WG admin, so if something is burning, I can help out. Okay.
A
Yeah, I don't know where we are; I haven't followed up. I think Sven knows most, and yeah, so I had a pairing session. We didn't get that far because we couldn't create the domain we needed for the bbl environment, but I think we now have kind of a self-service where we can just register one.
B
I know they were talking about that. Have they actually set up the DNS zone? Because I think Chris Clark put in a request to the Linux Foundation, who actually maintain cloudfoundry.org, to get ci.cloudfoundry.org so that he could then control it. He could then grant subdomains of that to each of the working groups, so that each working group can control its own DNS.
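Whether that delegation has actually happened is checkable from anywhere by asking DNS for the subdomain's NS records; a minimal sketch using only the Go standard library (the zone name is the one under discussion above):

```go
package main

// Sketch: check whether ci.cloudfoundry.org is delegated as its own DNS zone
// by looking up its NS records; no delegation shows up as an error or an
// empty result.

import (
	"fmt"
	"net"
)

func main() {
	nss, err := net.LookupNS("ci.cloudfoundry.org")
	if err != nil {
		fmt.Println("no delegation visible:", err)
		return
	}
	for _, ns := range nss {
		fmt.Println("nameserver:", ns.Host)
	}
}
```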
C
Chris Clark has ci.cloudfoundry.org set up, so theoretically the hard part is done. Also, it sounded like Stevenson was pitching earlier in that thread (which is in pfc, if anyone wants to check it out) that, rather than filing lots of DNS tickets, working groups could actually self-service off of ci.cloudfoundry.org.
C
That makes more sense, but the way it's written right now makes it sound like maybe we can already self-service off of ci.cloudfoundry.org. We possibly need a message to clear it up and see where we're at.
A
Okay, pipeline access. When I look at this new Concourse, I just see two pipelines, and all the others are either not migrated or don't have public access, because I have no role in this working group, and that's maybe it. So the question is: is there any reason why we would not make our pipelines public? That's maybe the easiest thing; they were public on the old Concourse. There seems to be no hard reason, right? Like, yeah, credentials and the locks, etc.
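As an aside, which pipelines a Concourse instance exposes to anonymous users can be checked over its HTTP API (unauthenticated, only public pipelines are returned); a minimal sketch, where the Concourse URL is a hypothetical placeholder:

```go
package main

// Sketch: list the pipelines a Concourse shows to an unauthenticated caller,
// i.e. the ones marked public. https://ci.example.org is a placeholder URL.

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type pipeline struct {
	Name     string `json:"name"`
	TeamName string `json:"team_name"`
	Public   bool   `json:"public"`
}

func main() {
	resp, err := http.Get("https://ci.example.org/api/v1/pipelines")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var pipelines []pipeline
	if err := json.NewDecoder(resp.Body).Decode(&pipelines); err != nil {
		panic(err)
	}
	for _, p := range pipelines {
		fmt.Printf("%s/%s public=%v\n", p.TeamName, p.Name, p.Public)
	}
}
```

Making a pipeline public is then a single fly expose-pipeline call per pipeline.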
B
Makes sense. I'm trying to access CI right now and it's not responding to me. Did the address change? Oh.
A
Post-cf-deployment, where everything is grey because, yeah, it just doesn't run yet, and these are the two pipelines I can see.
A
And then there was this question in one Slack discussion: whether we need to scale out or scale up the Concourse workers on the new Concourse. That came up because the unit tests were too slow, or something like that was mentioned. Again, I have no access to the AWS account, so somebody would need to look inside and find out what needs to be changed, and if so, just do it, right? Throw money at the problem.
C
Yeah, I posted that message. I don't have much experience at that level of Concourse, so I didn't run any tests to see if the CPU was being overloaded or anything like that, but the exact same tests with the exact same command slowed down by around 130 on the new Concourse. There are some alternatives to beefing up the Concourse workers, though: we could try and get the tests to run in parallel. They're currently run serially. So that's an alternative to throwing money at the problem, but yeah, it's significantly slower, in a not-great way.
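Since the suites in question are Ginkgo-based Go tests, parallelizing is mostly a matter of how the suite is invoked and decorated; a minimal sketch assuming Ginkgo v2 (the version the group is moving to later in this discussion), with hypothetical spec names:

```go
package smoke_test

// Sketch: a Ginkgo v2 suite whose specs run in parallel when invoked with
// `ginkgo -procs=N` (or `-p`). In v2, specs are parallel-ready by default;
// anything that must not run concurrently is marked Serial.

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestSmoke(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Smoke Suite")
}

var _ = Describe("app lifecycle", func() {
	It("pushes an app", func() {
		Expect(true).To(BeTrue()) // placeholder for a real assertion
	})
})

// Serial specs run one at a time, after all parallel specs have finished.
var _ = Describe("global settings", Serial, func() {
	It("toggles a shared setting", func() {
		Expect(true).To(BeTrue()) // placeholder for a real assertion
	})
})
```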
A
Okay, yeah, so that was it regarding the pipelines; that's what I had on my list. Then, yeah, you know what, on the Jammy stemcell: the cloud controller should not be the blocker anymore; there's a release that contains Ruby 3.1. Of course, it was not tested yet on Jammy, so that's maybe something, yeah, that could be done next.
B
Yeah, Carson and I have a session this afternoon to set up a test pipeline to validate Jammy against, I guess it will be against the develop branch of cf-deployment, just to test every change as it comes in. We do have the latest cloud controller, capi-release 1.134, which is on Ruby 3.1, so it should be fine, but we'll find out this afternoon when we get that up and running.
A
Will this pipeline be publicly available then? Yes, the... safety.
B
Okay, well, there's something weird with those two releases, and the same thing happened with 1.132 and 1.133, which are the two releases on Ruby 3.0.
B
So we don't know. Everything's green again now and everything's flowing, so I suspect that Carson and I will cut a release tomorrow, as is our normal pattern, I mean, as soon as the pipeline's green. At that point we'll look at CATs and the Concourse tasks and see if they need releases cut as well. But I wanted to make sure you were aware, because I know you had plans to try and...
A
We don't use it. Okay, we use RDS instances and have some specialized...
A
That's not... but yeah, it shows something fishy there.
B
Okay, yeah. Everyone I talked to couldn't explain why it suddenly started happening, and I haven't asked, but I'm pretty sure they won't know why it stopped happening again either. But yeah, if we updated the LD_LIBRARY_PATH, then the library loading would work; it just wasn't happening automatically as part of the pre-start script. So I'm not...
A
Sure. Yep, the final thing I had on my list was, yeah, for the cf-smoke-tests: there's now this automatic update of Golang versions. That's cool, thanks for your help, and yeah, I'm curious what the new versions will be in the next cf-deployment release.
C
I'll turn to that. Yesterday someone was asking that we bump the version of cf-test-helpers to v2, since we made a somewhat breaking change by upgrading to Ginkgo v2. So, fair enough, I was taking a look at that library, and we do kind of weird things with it right now: we treat it like a submodule, we just pull it in, but it is a Go dependency in multiple places, so that pulling-in thing is sort of a flaky job.
C
So I made the PR to upgrade to v2; they've approved it, so I merged it, and I've been going and upgrading the repositories, or the code bases, that use it: CATs and smoke tests. For both of them I've also looked to try and remove the CI element that goes and automatically pulls it in and updates it.
C
The thinking is that, in the future, we can just manually cut a release for that library, and Dependabot, or the smoke tests' upgrade pipeline, will pull it in whenever there's a new version. That might be a better flow for cf-test-helpers than letting it just break all the time.
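Consumed as a regular versioned Go module instead of a vendored copy, the suites would just import the v2 path and let Dependabot propose bumps; a minimal sketch, where the module path, version, and helper usage are assumptions based on the v2 bump described above:

```go
package smoke_test

// Sketch: depending on cf-test-helpers as a normal Go module rather than
// pulling the repo in like a submodule. The matching go.mod line would be
// roughly:
//
//	require github.com/cloudfoundry/cf-test-helpers/v2 v2.0.0
//
// after which Dependabot can open PRs as new releases are cut.

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"github.com/onsi/gomega/gexec"

	"github.com/cloudfoundry/cf-test-helpers/v2/cf"
)

var _ = Describe("smoke", func() {
	It("lists apps via the cf CLI helper", func() {
		// cf.Cf shells out to the cf CLI and returns a gexec session.
		session := cf.Cf("apps")
		Eventually(session, "30s").Should(gexec.Exit(0))
	})
})
```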
B
Yeah, I think that makes sense. That job is years old now and predates having Dependabot do nice PRs for bumping dependencies, so yeah, I like that.
A
Yep. So from our side, we would continue on this migration with the pipelines and see, yeah, how far we get. I mean, the pipelines are moved over, but to get them running, something might be needed. If we can do something here... it's a bit difficult for us to understand all the details and then whom to ask if there are problems, like the subdomain problem.
B
No, I think we just need to, you know... Carson said we need to make sure we've got the bot accounts in the team, so we'll try and get it. We can look at that this afternoon as well.