Meeting

A: So the pull request for the new job certificates is merged.
A: Most consumers of cf-deployment should see no effect, because you have to manually activate it; it's just that it interferes with our credential rotation. But that is our problem, our team's problem, so no big change for them.
A: The update-infrastructure job went orange because the state was not pushed. This has all been fixed manually, because if the state is lost, you have to destroy everything, and so on.
A: Carson presented two pull requests for automatically updating the Cloud Controller info data for the V3 endpoint, and I think, after some small reworks, this is now in. We now have automatic updates of the build version numbers in the manifest, and these should be shown if you call the V3 endpoints.
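
(For reference, a minimal sketch of checking that from the command line; the API host is a placeholder, and the exact fields returned depend on the Cloud Controller version:)

    # Query the Cloud Controller V3 info endpoint (hypothetical host):
    curl -s "https://api.example.com/v3/info"
    # The response should now carry the automatically updated build and
    # version metadata described above.
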
A: Another pull request, for automatic cf CLI Windows bumps, is also merged. Yeah, okay, so in short, everything from the last meeting is done and there are no problems. Good. Then let's come to today's agenda. I don't have too much. We wanted to check the cost savings in the GCP account, because the CATS smoke test environment is now destroyed after each validation run and then deployed again, so it's not constantly running, and yeah.
A: Well, okay, if you interpret this chart, you see a small effect, so we are saving a little bit of cost. This is the past three months, and comparing April and June, we are saving around 10 percent. Okay, good, so I think we are on the right track. But anyway, so far no one has complained about all these costs, so it seems to be not too much.
A: Orange here means: when we set up the new Concourse, it didn't work at all, and at first everything was orange, but the releases that got updates turned green over time. Just those two, the app-sd... no.
A: The cf-app-sd release and the garden-windows release have not been updated, and indeed it seems that... or, it doesn't just seem, it is the case that both of those are deprecated, so there are no new releases. One has been integrated into cf-networking, and the other, going by the URL, is not in cf-deployment anymore, so they are likely not used anywhere. Good catch. Yeah, well, I mean, luckily we had this trouble with the orange jobs, so you see what has not been updated. But for other releases, which will not be updated anymore in the future, we will have to look more closely, check whether there really are no more incoming updates, and then remove them.
A: Okay, then, we had a small discussion about the meaning of cf-deployment major versioning.
A: There are a few database migrations upcoming in the capi release which have to be deployed in steps. So there will be a kind of preparation step and then, later, the final migration, and you can't skip the preparation step. Now there was a discussion about whether we should release those as major versions and change, or enhance, the semantics of major versions a little bit, so that it means you have to deploy every major version and you can't skip any major version.
A: Otherwise your deployment could break. So this would mean an enhancement of the meaning of major versions; before, we just did this with the release notes, telling users: if you want to deploy this version, make sure you have deployed that version before. And yes, I wanted to collect your opinions on this: should we change the semantics of major versions, or does this not make much sense?
A: I think this is the case for other releases outside of Cloud Foundry, so that when you release a new version, you can assume that your consumers have deployed all major versions in order.
A: Then you know certain prerequisites have been fulfilled if you now want to do this or that; we can't really be sure of that. So when we release these capi database migrations later in the year, we will have to tell users: please make sure you have deployed the first migration step at least once, otherwise you will break your deployments, yeah.
C: Okay, it's a runtime downtime, actually, if I remember correctly. So the old capi version, let's say the one that is released now, one-six-one or whatever, prepares it, and if you have deployed it, then the one that we are going to release in three months (I think that was the agreement, correct?) will run fine. If you forgot, and you make a jump update, then you will have a downtime, basically. Okay.
B: So that kind of fits within the semantic versioning we already have then, doesn't it? Because there's a breaking change, and one of the semantic requirements for a major version is app or operator downtime, right? So we could release two majors, with one major having the pre-migration thing and then the next one having the actual change. Or, if the pre-migration thing doesn't cause any app or operator downtime, we could just do it as a patch, and then have a major that says: you should upgrade to at least this version before you upgrade to this major, or else you will have a lot of downtime; something along those lines. I'm good either way, but I don't think we need to change the semantic description.
C: One sentence where we say: we guarantee only that you can update from one major version to the next, but you cannot go to n plus two; that may or may not work. This is maybe something we should add.
B: I think that's a solid change. Just putting it here, though, is probably not enough to make people aware of it, right? If we do make that change, we probably want to advertise it in the Slack, or maybe the cf-dev mailing list; I don't know, probably the Slack. But that sounds like a good change to me. Like, we've definitely had people come to us before saying: oh, I was on v18 and I upgraded to v32, and now this isn't working. And I'm like: what do you want us to do here?
A: Yeah, we had a few Slack threads where people updated from very old versions, and it was really impossible for us to tell which versions in between they would have to deploy.
B: Sounds reasonable to me, as long as we broadcast that.
A: Okay, good. We can phrase it like this: we expect users to deploy all major versions in order, to avoid any trouble. And yeah, for our database change, we should explicitly mention it in the release notes.
A: Okay. It just came up because the capi release itself does not have a bigger semantics on versioning, as I understand it, yeah.
B: Real quick on that, just spitballing here, but we've been talking about how we can advertise that users or operators would need to have deployed one capi release minimum before they go to another capi release, right? But theoretically there could be a way to encode those kinds of things, right? We have a deployment manifest; capi itself...
A: If you could say that, yeah, yeah.
B: ...and just set the state back if you try to upgrade incompatible versions. It's probably not worth spending a ton of time on, but, I don't know, maybe it's worth calling out to the Foundational Infrastructure folks as something cool that BOSH could do, or as a really simple idea. I don't know if capi has the ability to check whether the migration has been done, but capi could theoretically fail itself on upgrade if it could detect that.
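
(A minimal sketch of that idea, as a hypothetical BOSH pre-start guard; the table name, migration id, and CC_DB_URL variable are all made up, since capi may not expose its applied migrations this way:)

    #!/bin/bash
    # Hypothetical pre-start check: fail the upgrade early if the
    # preparation migration from the previous major was never applied.
    set -euo pipefail

    applied=$(psql "$CC_DB_URL" -tAc \
      "SELECT count(*) FROM schema_migrations WHERE version = '20231201000000'")

    if [ "$applied" -eq 0 ]; then
      echo "Preparation migration missing: deploy the previous major first." >&2
      exit 1
    fi
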
C: ...a way to have it in a nice way. And we have three months: we will merge these migrations, I think, in the middle of December. It only makes it slower, but okay. There were similar things where it was not noticed, for instance between the API and the workers.
C: I mean, they have this Ruby model interchange, and, for instance, if there's a Rails update, very bad things can happen, and you don't find this out in the tests, because you don't have these scenarios in your CI; very, very ugly stuff. And I would also not over-promise too much with these version updates, because we don't test these updates, right? We test that updating from one minor version to the next minor version works. We do not consciously break one major.
B: This is true. Yeah, that all makes sense, and Greg's response sounds reasonable; that makes sense for running arbitrary code on the capi release. I may still reach out to Foundational Infrastructure about this kind of thing at the BOSH level, because that's an interesting idea. And yes, you're right: one thing about the n to n plus two is that it's not necessarily major to major, right? It's just one version to the next, because that's really all we test in our pipelines.
C: We should put this in the versioning document; maybe we should review that a few times, so that we at least say what we test, what we don't test, and what we intend to do. I remember, in the very old times, there was, I think, a Google Docs document sent around for every major cf-deployment version. I forgot the name, but somebody from the MBA did a good job and put a lot of effort into communicating around it and asking.
B: But that could be another path: if we're pretty confident in our minor releases not containing important changes, then maybe there's an argument that our upgrade path should be last major to current, rather than last minor to current. Just a...
C: It's also Diego's databases, and yeah, routing doesn't have databases, but they sometimes have certificate changes, which also involve several update steps. So it's good that...
B: Yeah, that's an interesting idea, because I'm pretty confident, from working on some of the components, that at least some maintainers throughout this organization view major releases that way. I'm just thinking, as an example, of the syslog agent change where it switched to allow for mTLS: people had to wait for this thing, which is a prerequisite to no downtime, to get released in a major version. As soon as that's released in a major version, we're okay to release the next thing, because then we can guarantee that probably everyone is going to get there, you know. And that's not actually the way that we're testing things, right?
C: It would not be nice if we then had a new major version every week, yeah.
A: Okay, so, apart from rephrasing the existing release documentation, do we need any follow-ups on this, or is that good enough for the moment?
A: Yeah, the versioning has not been a problem for anyone, that's true, except for users who stick to very old versions and then want to update all at once; but they also seemed to be happy with some workaround suggestions and did not care very much about downtime. So that was probably a half-productive Cloud Foundry installation. I guess the productive consumers already do it the right way, yeah, but nevertheless it can't hurt to ask for feedback. Then I will do that.
C: It's interesting how that works out for VMware, actually, for the private cloud installations. So let's see what feedback comes, because we are now more or less telling those customers: you cannot update like that, or you have to have tested it explicitly. I mean, that's all fine, but from the cf-deployment point of view we say interesting things can happen. Technically it doesn't change anything; it was the same situation in the past, yeah.
C: But, I don't know, do your customers update every two weeks, or every major version, or are there bigger gaps, like half a year, in between? Let's... yeah, yeah, interesting.
C: At least there could be a discussion, yeah, but...
A: Okay, let's start a small discussion and gather some feedback. Yeah, this would be all agenda points from my side. There's nothing new yet on the FIPS stemcell; it's too early. But when that one is ready, we will need a new validation, yeah.
B: That's all I got. Oh, actually, sorry, I just remembered: we've got a couple of Dockerfile bumps imminent that I wanted to call out, for, I guess, pipeline changes. Go 1.21 was released, so I've been holding off on merging the cf-deployment-concourse-tasks (cfdct) and runtime-ci changes to bump the Dockerfile to use Go 1.21 instead of Go 1.20.
B: Do we feel like a significant enough amount of time has passed that we can bump to that? Also, before we bump to it, we should probably make sure that we've cut minor versions of both of those things, so that all of our tasks are using the latest stuff, and then we can cut a major to give people some time.
B: That's the BOSH CLI image, and then the cfdct one is the other one. I don't know if we actually cut releases of those two (we just use the latest of runtime-ci and the BOSH CLI), but other people depend upon them too. So I've been holding off on bumping to Go 1.21 in all three of those until we hit some critical mass where most people are on Go 1.21.
B: This kind of thing becomes less of an issue, because Go 1.21 introduces forward and backward compatibility with the Go language. So from now on, hopefully, it should download and use the correct Go version for whatever your module declares, which is kind of nice. But there is a catch: if you're still using Go 1.20 on a project, introducing Go 1.21 to your module can break things, because it introduces a new go.mod line called toolchain, I think.
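
(For illustration, a minimal go.mod using the new directive; the module path and version numbers are made up:)

    module example.com/hello

    go 1.21.0

    // New in Go 1.21: pins the Go toolchain used to build this module.
    // Toolchains older than 1.21 may fail to parse a go.mod that uses it.
    toolchain go1.21.1
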
B: That's a huge pain in the ass, yeah, huge pain, yes. But I mean, I get it, the forward and backward compatibility is super nice; I'm just a little scared of that. I think cf-smoke-tests has already broken some GitHub Actions because of that change, yeah.
A: Indeed, it did for us too. We noticed a bit late in our CF update that this was introduced and requires Go 1.21, so we had to revert the cf-smoke-tests revision to an older revision to make it work for now. Yes, so this may be critical here. But okay, from this version on, Go guarantees some forward and backward compatibility, which sounds very good.
B: That was one part of it; if we're feeling good about Go 1.21 now, I can look to merge those PRs. The other part of it is that the cfdct Concourse image, if you go to the Dockerfile, uses a base that is no longer in support: we're based off golang:1.20.5-buster, and...
B: ...the other day I realized that we weren't getting auto-bumps for some reason, and we used to get auto-bumps. I was confused, so I went to Docker Hub to check it out: they stopped supporting Buster as a base for the golang Docker Hub official images. So we'll probably want to upgrade to either Bullseye or Bookworm as the base. I don't have strong opinions about which one; it looks like Bullseye is the next one, but Bullseye is also already the old-stable release.
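
(The bump being discussed would be a one-line Dockerfile change along these lines; the exact tags are placeholders:)

    # Old base; Buster variants are no longer published for the golang images:
    # FROM golang:1.20.5-buster

    # Hypothetical replacement on a still-supported Debian base:
    FROM golang:1.21-bullseye
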
B: So if we wanted to jump to the latest, it would be Bookworm; that's the current Debian stable release. However, I think that Bookworm is actually ahead of Jammy; like, Ubuntu Jammy is, I think, based off of Bullseye. So I'm not sure we want to be running our stuff in a Debian container that is theoretically ahead of what we're running everything else in, in our BOSH environments. You know what I mean?
C: I mean, it's half a step for sure, but if that one works out, then we have half of the way, relatively, and the other part we can postpone a bit. Who knows; I mean, I guess at the beginning of next year, '24, we will start the discussion about cflinuxfs5, to do it maybe a bit earlier than last time, and then it's maybe the right time to discuss...
B: ...Bookworm, yeah. Cool, that should work, I guess. The only reason I brought up Bookworm at all is that I don't know what the Debian release cycles look like; they're non-standard, I think. I don't think they declare releases and LTS stuff in the same way that Ubuntu does; they're just like: here's a release, and now everything else is one degree older, and we only maintain the last two, something like that.
B: So if we use Bullseye, which I'm fine with, there's just the danger that, prior to us upgrading to the next thing, they cut support for Bullseye. But I guess at that point we can just deal with Bookworm, whatever it presents. And they probably wouldn't; I doubt they would fully cut support for Bullseye while Jammy, and whatever is above Jammy, still depends upon Bullseye. So...
B: Yeah, so it's still supported, under LTS support, whatever that means, but yeah, the golang official images have stopped supporting it.
B: I think we're probably fine to move to just Bullseye; it moves us closer to what we deploy. We were on Buster, which corresponds roughly with Bionic, I think, because the Ubuntu releases track the Debian releases to some degree, but they're not tied to them in any explicit way that I could find on the Ubuntu website; I'm just kind of trusting random Stack Overflow responses on that one. So it seems like we should be fine to move to Bullseye: that's what Jammy is based off of, and it puts us close-ish to what we're deploying in BOSH.
B: All right, good, yeah. That's all I got. Okay.