From YouTube: App Runtime Deployments Working Group [July 14, 2022]
A: Okay then, let's get started. Welcome to our bi-weekly working group meeting. First, briefly, a few organizational things: I will be on vacation for the next two meetings, but Stefan Maca has gladly accepted to represent me, so we can continue our working efforts during vacation time.
A: We will get a few more repositories into the responsibility area of our working group. bosh-bootloader shall be transferred — here's the open pull request. They've proposed to delay the transfer until we have finished our current efforts with migrating all the pipelines, but then later we will also have bosh-bootstrap — let me check... yeah, okay. These two repositories will then also be part of our working group.
B: One thing we could discuss — and we need Dave for that, hello — would be, let's say, if everybody agrees and we go ahead with this bosh-bootstrap and bbl move. If that is supposed to happen, the question is: would we open a new area in the CF org to keep it separate, or maybe also to demonstrate that we do not have sufficient committers for this project?
C: It's a combination of not really having the team size, but also not really having the skill set. It's very different from maintaining cf-deployment: there's much more lower-level infrastructure knowledge required to work with Terraform, and then more in-depth knowledge of BOSH — working with the BOSH deployment and making sure that the cloud config and the runtime config are set up correctly.
C: That's why I personally just said: can we just delay the decision until we're ready to at least talk about it as a working group and decide if this is the right home for it? I think the challenge is that...
C: They don't use bbl; they use their own Terraform templates. They basically do the same thing, but I get the feeling that they don't want it because they didn't write it and they don't know how to maintain it — which is understandable, but it has left bbl in a very isolated position. Since the infrastructure team was disbanded about three or four years ago, it stagnated for a while.
C: I took it upon myself to resurrect it and at least get a pipeline up and running that bumped bosh-deployment, so that we were up to date. Then I handed it off to a team that was supposed to manage it; they got maybe a little bit optimistic about how much management they could do, tried to actually modernize it, and that derailed their maintenance on it.
C: And so it languished again until Ruben Stevenson and myself resurrected it a few weeks ago.
B: Okay, it's fairly difficult for us to make this decision here, because we don't know the history.
C: Maybe we should use whatever tooling the BOSH team uses to stand up their BOSH deployment environments. That way we're just using infrastructure that is maintained by them — or at least tooling, maintained by them, to produce the infrastructure — and then all we have to focus on is: in addition to that infrastructure, we need a load balancer, and we can have a small piece of tooling that specifically does that for Cloud Foundry.
C: The other concern with bbl, unfortunately, is that it's also how people stand up Concourse environments if they're still running Concourse on VMs. That's definitely not our problem to try and fix, but it would leave the people that are still running Concourse that way a little bit in the lurch; they would have to figure out how they want to handle that going forward.
C: So I suspect that the next step needs to be an email to cf-dev to start a conversation, or maybe we try to direct things onto Slack, where it's easier to have those kinds of back-and-forth discussions.
C: That's why I said: let's just hold off on this for now and see where we go.
B: We could also reply in the thread to the TOC and say: this is the topic we are discussing; for now, please don't move forward.
A: Good, okay, so this needs further discussion. We have a few more points on the agenda today, so let's continue, because otherwise we will run out of time. Next would be Maximilian Mull, and maybe Alexander, to give us an overview of the proposed feature "tcpdump for everyone". Then I will give a brief overview of the pipelines migrated to our own Concourse — Carson is not on the call, but okay — and then, finally, we will talk about the Ruby 3 update of the capi-release.
D: Perfect, yeah. So "tcpdump for everyone" is something we have been developing for the past few months, mainly by Dominik, but he's unavailable today, so I'm taking over for him.
D
The
problem
that
we
are
facing
is
we
often
get
tickets
from
customers,
or
we
see
issues
where
it's
not
really
clear
where,
where
the
issue
lies,
the
underlying
issue-
and
we
often
just
get
tickets
saying
yeah,
my
app
is
not
working.
It's
the
network
is
too
slow.
Something
with
the
routing
stack
is
wrong.
D: So, as I said, it's mainly based on tickets and customer requests that we get, and we would really like to give customers the power to debug this on their own. We do not expect everyone to know how to read these pcap files or tcpdump output and understand what is going on — but just giving them the power to provide us...
D
The
information
we
need
is
already
a
big
step
forward
for
us,
especially
stuff,
like
502's
nego
rotor,
where
the
customer
claims
the
app
has
not
crashed
and
there's
something
in
between
the
go
router
and
the
app
for
example,
and
another
nice
thing
that
we
could
also
get
with.
This
is
kind
of
dump
ingress
scenarios.
So
we
also
have
issues
where
the
customer
is
even
unable
to
connect
to
the
go
router
or,
in
our
case
the
agent
proxy
of
the
platform,
and
we
currently
don't
have
a
way
to
easily
capture
traffic
there.
D: So usually the solution is to tell the customer to capture on their side, but that's also difficult. With this tcpdump solution, we would also be able to capture selective traffic on the load balancer of the platform. This data gives us hard evidence on what is actually going on — where the failure is occurring, what is going wrong — and enables us to find out what the issue is, whether it's on our side or the customer's side.
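As a rough illustration of what "selective capture" means at the tcpdump level, the sketch below builds a capture invocation restricted by a BPF filter, so only one client/port pair is recorded instead of all traffic on the interface. The interface name, addresses, and port are made-up examples, not part of the proposal itself.

```python
# Build a tcpdump command that captures only traffic for a single
# client and app port. All concrete values here are illustrative.

def build_capture_cmd(interface, client_ip, app_port, out_file):
    # BPF filter: only packets to/from this client on the app port
    bpf = f"host {client_ip} and tcp port {app_port}"
    return [
        "tcpdump",
        "-i", interface,  # capture interface, e.g. the LB-facing NIC
        "-w", out_file,   # write raw packets to a pcap file
        bpf,
    ]

cmd = build_capture_cmd("eth0", "203.0.113.7", 8080, "ingress.pcap")
print(" ".join(cmd))
```

Running such a command on the load balancer (with the appropriate privileges) would produce a pcap file containing only the traffic relevant to the ticket at hand.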
D: Basically, there are two scenarios. In the vanilla Cloud Foundry scenario you have a normal Cloud Foundry app, either one or multiple instances, and you just want to see which requests — which network traffic — are reaching it. The way it's currently implemented is that you stream the capture back to the user: the capture on the app container is streamed back to the CLI, which then writes it to a file.
D: But it's also planned to have intermediate storage on the VM of the pcap server, so that we can capture traffic even if the client doesn't have a strong network connection. Because we need to send back all the traffic, if the connection is not fast enough we would lose packets and the connection would die.
D: So let's see. Okay, we have a CF landscape here, and we have the pcap deployment and the pcap server running on this landscape. You basically start by talking to the endpoint that this pcap server provides, which also offers you the cf CLI plugin, either for Mac or for Linux — kind of like Concourse does. You can then just download that file and install it with `cf install-plugin`. I already did that.
D: And now we have a simple test app that we use for all kinds of testing. What it does is tell us what the X-Forwarded-For header was when the app received the request, so we can see: okay, it's my client, and here we have the HAProxy in that case. What we can do now is take a look at the CLI of this pcap plugin, and basically we just tell it to capture from a specific app, by app name.
D: We could specify the instance and the type — this is either "web" or "job", I'm not entirely sure, but of these two types it will basically always be "web" for capturing traffic — and also the network device or interface you want to capture from. So let's just start the capture, and we now start sending requests to the application. You can see it's also immediately reading data, so we can stop that now; it writes into the file and tells you: okay, I'm capturing from both instances of the application.
D: So what we saw is: as a user, we just use the pcap cf CLI plugin, we log in as we always do, and we can just specify the application. The plugin will work out where the application is and how to talk to it, and this is done by talking to the pcap server and the UAA.
D
So
the
token
that
we
get
from
uaa
gets
sent
to
the
pcap
server
and
the
pcap
server
then
validates
that
token,
with
uaa,
and
currently
it
checks.
If
the
person,
the
user,
that
is
trying
to
capture
from
a
specific
application,
is
at
least
space
developer
in
the
space
where
the
application
lives
to
make
sure
that
no
one
can
just
randomly
capture
traffic
from
other
applications.
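A minimal sketch of that authorization rule — the function name, the data shape, and the role string follow common CF conventions but are assumptions here, not the actual pcap server code:

```python
# A user may capture traffic from an app only if they hold (at
# least) the space_developer role in the app's space. The
# (role, space_guid) tuple shape is made up for illustration.

def may_capture(user_roles, app_space_guid):
    """user_roles: list of (role, space_guid) tuples for the user."""
    return any(
        role == "space_developer" and space == app_space_guid
        for role, space in user_roles
    )

roles = [("space_auditor", "space-a"), ("space_developer", "space-b")]
print(may_capture(roles, "space-b"))  # True: developer in the app's space
print(may_capture(roles, "space-a"))  # False: only auditor there
```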
D: The API route, by the way, is just exposed using route-registrar, like other deployments do as well. Now, in our case, the pcap server realizes: okay, the user wants to capture from an application.
D: So it talks to the cloud controller to locate the cell that the application is on, gets some other details from it, and then talks to the pcap server. So this component is the pcap API and this one is the pcap server — the naming is a bit confusing — which is running on each Diego cell on a dedicated port, and the pcap API then tells the pcap server: hey...
D: And for the future, it's planned that we store the stream that we get back from the pcap server in a local pcap file and, when capturing is done, copy it to S3, so that the client can then be handed a URL saying: if you want to download it later, please contact that URL. And then it's kind of the same thing.
D
The
bosch
case
is
basically
the
same.
The
main
difference
is
it
does
not
talk
to
the
cloud
controller,
but
the
bosch
director,
because
we
now
need
to
locate
the
instance
vm
of
that
deployment
instead
of
some
app
some
app
container,
but
here
as
well,
there's
the
pkip
server
running.
So
we
connect
to
that.
Pcapp
server
tell
it
to
capture
the
traffic
and
yeah.
D: We have a few open questions that we still need to address, the main one being whether or not this should be part of cf-deployment or just an optional plugin — an experimental ops file or something like that.
D
So
basically
the
ops
fire
needs
to
be
injected
into
the
ecf
deployment,
because
we
need
to
put
push
it
to
the
to
the
eagle
cells,
but
the
pkf
server.
Apk
api
itself
needs
to
be
an
independent
bosch
deployment
and
that's
like,
what's
the
best
way
to
achieve
that,
there
is
also
the
possibility
to
implement
it
more
like
cfssh,
so
not
have
a
dedicated
binary
running
alongside
on
every
vm
once,
but
rather
to
have
the
pcap
server
run
inside
the
app
container.
D: It could be possible with file capabilities, but that's something we have to explore in the future. And then it's unclear whether or not authenticating as a space developer is enough or if we need more — but the space developer can push the application and basically has full control over the application: what it logs, what it sees, what it doesn't see.
A: Well, okay, the interesting question for us is, of course: what would you need in cf-deployment? Typically, for an extension like this, you can provide an experimental ops file which defines all the artifacts you need, and whoever wants to use this adds that ops file to their cf-deployment. The rest is packaged as, I don't know, a BOSH release somewhere else, and would then be installed next to Cloud Foundry.
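For illustration, an experimental ops file of the kind discussed here might look roughly like the sketch below; the release name, version, and job name are placeholders, not the actual pcap release.

```yaml
# Hypothetical ops file: register a pcap release and add its agent
# job to the Diego cells. All names/versions are placeholders.
- type: replace
  path: /releases/-
  value:
    name: pcap        # placeholder release name
    version: 0.1.0    # placeholder version
- type: replace
  path: /instance_groups/name=diego-cell/jobs/-
  value:
    name: pcap-server # placeholder job name
    release: pcap
```

An operator would then apply it with `-o` when running `bosh deploy`, like any other experimental ops file in cf-deployment.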
D: I don't know how applicable that is for other people, but for us it's mainly interesting because we can then manage it independently of cf-deployment. On our landscapes, cf-deployment is already quite big and we would not like to increase its size even further, so the idea is to have it as a separate deployment that we can manage independently.
B: Okay, I mean, for this case there are several options, I would say. If we look just at cf-deployment, I guess an experimental ops file and deploying it as part of the deployment would be the natural approach — but still, at the end of the day, it is its own BOSH release, right?
B: Today, for instance, we deploy the Diego cells as separate deployments — that's something we package in our landscapes as a special case — and you can tweak basically every deployment; with ops files you can do basically everything on the resulting manifest. So it would be possible to not deploy it as part of the cf-deployment, but to have a separate pcap deployment.
B: I mean, that's the content for this working group, but for me the discussion starts a little bit earlier. The real meat is in certain projects, right — the ones that implement the pcap server and the pcap API — and those would then belong to the Application Runtime working group. Is that correct?
B: That's a good question that I can't answer — okay, because this is, I guess, something that should not belong to the deployment working group.
B: For cf-deployment, I would say it's really, in the beginning, an experimental ops file, and then we will see how this goes, right? If it becomes the hottest feature since sliced bread, maybe one day it makes it into the standard deployment.
C: Yeah, generally speaking, the experimental ops file is the starting point, and then, like you said, it either gets inlined into the main manifest or just promoted to be a more stable ops file that lives in the operations directory, which people can choose to include. My assumption is this is all one BOSH release.
C: No, I mean, this is really cool. I think I can definitely see the value to people, and from a cf-deployment perspective, yeah, I'd love to get this in here.
B: The capi-related questions we can also zoom in on in a coffee break or whatever — that's possible. I do have some opinions here, but it's not for this working group meeting. Okay, we will do that, no problem.
A: Okay, so this is the current state of our new Concourse, and these are the pipelines that have been migrated. What Carson did yesterday is migrate the remaining credentials — they were still over here — to this Concourse's CredHub, so we should now have pretty much everything to run these pipelines. He also triggered some of them, but not all; some have automatic triggers and just monitor certain things.
A: What I did not fully understand is that for the cf-deployment fan-out tests we need a Windows resource for testing, and this is something that cannot yet be shared because it uses VMware-internal credentials, right?
C: Yes. We have an internal Toolsmiths team that produces a pool of cf-deployment environments, and when we implemented that, we were testing it out for people. So it claims an environment from that pool, deploys Windows cells on it, and then runs the Windows CATs.
C: I don't know if that problem, so to speak, has been solved in general — like access to that system — or whether we need to figure out a more working-group-centric way of doing it. In the short term, probably the easiest thing would be to convert this to be like our other environments, where there's a long-lived BOSH director and we just deploy as needed and then tear it down at the end.
C: So I don't think it would be too involved to get that work done.
A: Good. Well, this one seems to take longer, so okay — that's why this cf-deployment pipeline is still paused. The other... well, this one should — maybe we just...
A
Someone
could
open
the
pr,
then
we
can
test
this
one
here.
This
is
also
a
wise
update
releases,
still
post.
A: Carson and... yeah, I will try to schedule another meeting with Carson, I mean.
C: Regarding those groups, I think it's just a question of isolating which repo you are looking at pull requests for. We could split them out into separate pipelines if that's easier to reason about: a CATs pull-requests pipeline and, let's see, a cf-deployment pull-requests pipeline.
A
Of
course
yeah
good,
I
mean
that's,
then
the
fine
tuning
yeah
too
right.
First,
we
need
to
get
it
get
things
running,
yeah,
okay,.
C: I had changed this pipeline so that the run-cats job was not blocking on cutting a release. It looks like that change did not make it here; I'll need to make sure that I did commit it to cats — I thought I had. So this might just need to be re-flown with a new version of the pipeline, because that should...
C: ...be a great first step — and it will really exercise our Concourse instance, make sure that it's scaled appropriately, and then we can go from there.
A: So then let's come to the final point: integration of the latest capi-release with Ruby 3.
A: So this is the latest capi-release, 1.132; the latest in cf-deployment is 1.130. Yes, and this one now introduces Ruby 3.
A: And now we are thinking about how to get this into our regular CF update here at SAP.
B: Maybe some background: we are not sure what will happen on our big landscapes when Ruby 3 gets into the cloud controller. It could do bad things, and then we want to have an option to roll the cloud controller back to Ruby 2.7. The safest way is to have a version switch where really only the Ruby 3 change is inside and no other functional stuff, because otherwise you never know what happens. So our idea was:
B: Is it possible that we cut a cf-deployment release, maybe already on this new Concourse in the next days, which consumes capi 1.132 before capi 1.133 becomes available? Then we would take this version, roll it out, and have the option to patch back the capi version in case of trouble. We don't expect trouble, but you never know, and it would be less risky. That was the idea. And maybe also to mention: this is just Ruby 3.0.
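The rollback option described here would typically be expressed as an ops file that pins the capi-release version back to the pre-Ruby-3 release. A rough sketch, assuming 1.130 (the version currently in cf-deployment, per the discussion above) is the rollback target and omitting the real URL and checksum:

```yaml
# Hypothetical rollback ops file: pin capi-release back to the
# previous (Ruby 2.7) version. The url/sha1 entries that a real
# ops file would carry are left out as placeholders.
- type: replace
  path: /releases/name=capi/version
  value: "1.130.0"
```

Applying this on top of the new cf-deployment release would swap only the capi version, which is exactly the "only the Ruby 3 change inside" property B is asking for.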
B: It is not yet enough for Jammy, but we talked about that in the capi open office hour; the next step should be much smoother and easier. So this work will continue, and we will then finally have a cloud controller that is ready for Jammy, right? The big step is 2.7 to 3.0, because there were tons of incompatible changes.
C: Sure. So I would expect CI to already have bumped this; the update-releases pipeline should have done it. We'll have to look at the old Concourse, I think.
C: But if you go to the update-base-releases group — the second group there, yeah — these are all the base releases that are compiled into the compiled-release ops file, and in here there should be an update-capi job. I don't see... yeah, there it is, there's the capi bump.
C: So if you click on the very far right-hand side, there's a release-notes-template job. If you just click on that, open up the generate-release-notes-template task, and scroll down, we'll see all of the bumps that have made it. So it has not made it: capi 1.131 is the last release that's there, so 1.132 is stuck being validated at the moment.
C: So I think the biggest challenge is — if you go back to the pipeline view — I wonder what's going on with bbr-deploy, because that looks like it's in a bad way.
B: I think that's okay. Normally we don't have interdependencies between, let's say, capi and cflinuxfs3 — that's independent, I would expect. Normally we just consume what comes out of cf-deployment; it's just that we want to have a version with 1.132, right, ideally tested and released. That would make it.
A: So Ruby 3 is on the way. Anything else we need to discuss here?
B: I would just mention one thing regarding the GitHub teams for the working groups there.
B: On the naming: Eric just merged the PR to fix that, so in, I don't know, three to four hours the correct names will be there, but then the old ones are gone — if the automation works correctly; with that, you never know what comes out when it runs. If you have used those names somewhere in Concourse, etc., or in branch protection rules, that needs to be adapted. That's all.
B: Knock on wood. We wanted to get this through before it affects more users.
C: The list of private repositories that we need for the working group isn't up to date, so we need to make sure that we get a PR in to fix that before the old team is removed. Otherwise we'll lose access to a few things that we kind of need.
A: There's trouble ahead, potentially. No — update us when it's done, and then we can adopt the new team names and everything is fine. Yeah, good. So, anything else?
C: From my end — I mean, we definitely need to have more discussions about bbl, yeah.
C: We can either do that at the next meeting, or we can schedule something separate for it. It's encouraging that you feel that you have the expertise to take on maintenance, which would be...
B: I actually like the idea of having the meeting, let's say, also with Ruben.
C: Yeah, I can definitely provide more information about the challenges there are with maintaining bbl in its current state, and hopefully help everyone come to an informed decision as to what's the best path forward at this point.
B: Okay, regarding the release: John, would you try to cut it tomorrow? Let's say the assumption would be that this pipeline works through — or who would actually do that?
C: Try it, and have me on the line while you're doing it. I mean, it's really not that complicated, so following the wiki instructions should be fine, I think.
B: Yeah, let's try it out, and if we have questions or get stuck, then we can also ask offline.
B: Great. I just wouldn't wait too long, because it could slip to the next release and then we are a bit... I mean, this is not a real problem, because of course we can patch down the capi version, but this is something we usually don't do; we try to avoid it, right. Yeah, okay, cool.
B: Okay, and I mean we can have this meeting anyway — I can host it, or join and bring in at least...
C: Yeah, I'm happy to touch base; it doesn't have to be a full-hour meeting at that point. If there's not as much to talk about, that's fine, but I would hope that we're going to continue to make progress on moving pipelines over and getting those running on the working group Concourse, so there will definitely be updates. So yeah, I'm happy to still meet.
A: ...attend. I'm also here next week, but the next two working group meetings I can't attend.