From YouTube: 2023-01-10 Delivery Group: Ruby 3 Rollout
A: Awesome. So I'm going to move on to point number three, just to give you a general status of how things are right now. GitLab merge requests and pipelines are now running on Ruby 3, with the exception of security pipelines, auto-deploy pipelines and stable branches; those are still running on Ruby 2. They did this swap, I think, last week or a couple of weeks ago, so it hasn't been in place for that much time.
A: Also, the Ruby 3 exploratory testing is in progress. As I mentioned, they are pinging every team and stage to check their own features. This is going to take some time, because they are basically relying on people being available to manually check. Nevertheless, they have set a due date of January 30, and that is the current state of progress. Jenny, do you want to verbalize your question?
C: Oh yes. So, do you reckon that they will be doing that as things get rolled out?
A: That is an interesting question, and in the comment Matthias made he was questioning the same thing. Let me link that comment. Yeah.
C: Because, as I said in the doc, essentially during the rollout process we're trying to figure out the baking times for each environment. If we want specific teams to have time to do manual testing, it will obviously take longer than an automated set of tests, of course.
A: I think it really depends on the results of this exploratory testing. On one side they have already performed performance tests, they are doing this exploratory testing, and they have performed a bunch of QA tests.

So, depending on the results in this issue, we might see that we need to run manual tests for some stages. There might be some stages where the testing was inconclusive, and they might want to check those live. But I'm expecting there not to be a lot of those cases. I think the first step will be to wait for the results of this exploratory testing.
D: The thing is, what I'm afraid about with exploratory testing is that it's really prone to regressions, right? They keep doing things, and since it's very expensive to repeat this exploratory testing, I'd like to stack the odds toward having a successful release. So, do you know if they plan to move some of this exploratory testing to automated testing as part of this effort before the release? Or is it just: okay, we decided a subset of tests is going to be automated, and we run some QA?
A: I think this could be confirmed with Quality, because we have the smoke and reliable tests that should cover the main stages, and they are required for the deployment to move on; they need to be green before continuing to the next stage. So those tests are automated. I'm not sure whether, as a result of this, more tests should be added, because I know that we have some in quarantine.
B: I think it may also depend on the type of failures that the teams are looking for, because I believe, from reading some of the previous issues, that they are more concerned with fairly drastic changes, drastic failures, in which case exploratory testing is probably going to catch most of those. I think the concern is more: is there a fundamental shift in how memory is used, or something like that?
B: They'll want to go live soon, is my guess, exactly for the reasons you mentioned, Jenny: that delay is going to cause more problems. So I think adding a lot of extra automation could push that timeline out and potentially also add more risk. I think we should follow these dates. I know we haven't necessarily got a rollout date, but I would assume that once these sorts of tests are concluded, things could move fairly swiftly.
C: Yes, I believe so. You briefly mentioned, what's it called, memory usage and performance, right? From what Matthias has commented (I don't remember if it's the same comment), he mentioned something along the lines of: he wasn't sure they were doing performance testing like that. Or rather, they did an initial batch, and they couldn't find anything except some stuff that were known issues.
C: I'm not sure, yeah. I'm not sure if that's something that we want to at least keep an eye on as we roll it out, in case it's something that, I don't know, will get worse with traffic, say, right?
A: Oh yeah, I think so, yeah. That reminded me of the Matthias comment where he said that he performed the performance tests on the Omnibus environment, or the [unclear] environment, and that environment only goes so far, because it doesn't receive as much traffic as the production one. So yeah, it could be a good point to add to our rollout: see if some of those performance tests can be performed during the rollout, just to make sure that we are not seeing any regression from that side.
B: Can I suggest that we kind of group this into "ask them what they need to validate"? I think our role here is to enable the rollout, and I think this is going to be super key for managing that: how long are our environments locked down for? I think we should expect, and I think they're aware, that they will likely know this stuff. But in terms of what we do have: we have exploratory testing, and we potentially have some performance profiling or those sorts of things that probably need to take place. We absolutely need them to specify in advance exactly what they're going to do and how long they need, because otherwise we will put this on an environment and it will end up just sort of running, and I think we will impact many other things then. So I think it should be included, but I don't think we should take that as an action item on our side.
A: What about asking Quality if the smoke and reliable tests are enough for the rollout? I'm just thinking about it, since those smoke and reliable tests depend on what we have on master, and on master we sometimes quarantine specs, because, well, they are failing or they are flaky or whatever. I just want to be sure that we are running the full test suite that we should. Do you think that's something we should be concerned about, or am I just thinking ahead?
B: Moving to the next stage, I would assume that, yes, they will want to include those sorts of things, and then what we can do is help make sure they run. If they say, for example, "oh, you want packaging QA", then okay, great, here's how we would make sure that has run. But I don't think we should be the ones specifying "this test must happen." We'd want this change to flow through; in an ideal world, right, we would run this as an auto-deploy. So I think that's the sort of approach we should take (in a safer way, of course), in the sense that we have quality gates and we enable them through environments.

I don't know if we've already started those conversations, though, of asking the Ruby 3 team to actually, literally, write down: given we have these environments available, and here's what we can provide them, how are they going to validate this as it goes through to production? If we haven't started that, I think we should.
C: Yeah, yeah, I've been contacting Matthias. Is he the right point of contact for questions like this? He...
A: Awesome. And well, another thing that they are doing (I'm going to move to point C now) is analyzing the adoption blockers. Those are issues about gems included in the project that are no longer actively maintained, or that don't have enough specs or coverage to move on. It is still unclear how many of them are actually blockers; they are analyzing that data this week. So yeah, that is another thing they are working on.
A: Just moving on: so that was the general status. Now we are going to talk about the Delivery current status. We are now working on defining a rollout strategy, and Jenny is doing that based on some of the discussions that we just had: considering how much time it should sit on each deployment, considering what metrics we should measure, etc. Jenny, do you want to verbalize your comment? Yeah.
C: Yeah, yeah. So the first pass of the rollout strategy is at the bottom of the issue description. As you can see, there are some metrics, and some dashboards that use those metrics, that I think we care about. The part that I'm not overly sure about, as we just discussed, is the baking times. And in case I missed any dashboards, I tagged some of the other SREs in Delivery.

Depending on what comes out of that conversation, I might go ahead and tag the orchestration team as well, for their SRE expertise. But other than that, I think the first pass of it is good, and I'm just waiting for some feedback, and for what happens with the exploratory testing, before saying anything more, especially about those two areas of concern.
B: What's your current feeling about the rollout plan that we have at the moment?
C: Because our per-environment deploys are so separated, I'm wondering what would happen if, say, one of the components or one of the services encounters a much bigger issue: whether we can somehow quarantine that and still move along with the deploy, and what that would look like. Because we're doing this in a very compartmentalized, per-environment sense; even in gprd we're doing this really granularly, which is great. But when it comes to "hey, one of the components is misbehaving", do we have any way to mitigate that? That's kind of what I'm concerned about. But in terms of what we have right now for rolling out deploys, with the tools that we have, I think this is the best that we can go forward with.
C: Yeah, that's the baking time that I'm concerned about, right. Okay, so I wrote there that it's an hour; I think an hour might be too short if we're doing manual testing.
C: So all of this kind of relies on what the other teams are used to doing, how long that usually takes them, and stuff like that, right? And I don't have answers to that; that's why I've been going back and forth with Matthias about it. So yeah, once that gets a bit more clarified, and the exploratory testing gets done, and we know which part of it we need to repeat after staging and then after prod, then we'll probably have a better idea.
D: On what to do if there is a component failure: I think that is partly up to them, but it's also up to us, since we are kind of responsible for delivering. So we should set our boundaries there and be able to say, okay, we are not comfortable with this, because, you know, it is a monolith with a lot of parts, and they might even use a different component. We should be clear about our boundaries: where we are, and what we are or aren't comfortable with. That's maybe something that is good to discuss with Matthias as well, and please ping me outside this discussion if you also need my point of view there. The other part... the other question I have is about the baking time. Two questions, actually.
C: That's definitely something that we can look into: increase the traffic and then have the baking time a bit longer on production canary versus staging canary. I'm guessing staging canary will be pretty much the shortest, and then production canary will be a lot longer than that, because we need a green light from all the teams to go into that promotion. This is what I'm thinking.
D: That one hour at five percent could allow us... so in the rollout plan I would probably consider one of two things. I actually would probably prefer to stay a longer time at lower traffic rather than a shorter time at a higher amount of traffic. Or we could even decide to do a staged rollout, where we say: okay, the first hour is at five percent; no problems? The second hour we go to 10 percent. That could also be an option. It's possible to do that; we did it for gitlab-sshd. If necessary, Jenny, that just means playing with the weights on each proxy. But yeah.
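For illustration, a minimal Ruby sketch of the staged rollout described above. The helper names (set_canary_weight, error_rate_healthy?) are placeholders for whatever tooling actually adjusts the proxy weights and reads the agreed dashboards, and the step sizes and bake period are illustrative, not the decided plan:

    BAKE_SECONDS = 60 * 60    # one hour per step, per the discussion
    TRAFFIC_STEPS = [5, 10]   # percent of traffic sent to canary (illustrative)

    def set_canary_weight(percent)
      # Placeholder: would adjust the server weights on each proxy.
      puts "Setting canary traffic weight to #{percent}%"
    end

    def error_rate_healthy?
      # Placeholder: would query the error-rate dashboards for regressions.
      true
    end

    TRAFFIC_STEPS.each do |percent|
      set_canary_weight(percent)
      sleep BAKE_SECONDS                 # let the change bake at this weight
      unless error_rate_healthy?
        set_canary_weight(0)             # drain canary and stop the rollout
        abort "Regression detected at #{percent}% traffic; rolling back."
      end
    end

The point of the structure is that traffic only steps up after a full bake period with healthy metrics, which matches the "longer at lower traffic" preference voiced here.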
B: Right. I think we don't necessarily want to have a multi-day policy if we can avoid one, but at the same time, one of the benefits of having this process, with the managed rollout, is that we have the chance of saying: actually, we can leave it there for three hours, four hours, and that's okay. It's a little disruptive, but it's not as disruptive as going too fast and causing problems, or things like that.
B: I apologize, I haven't read this issue, and it may all be answered there, but I'm just curious as well: is there a benefit to using staging canary? Do we actually get benefits from deploying there, or, for this case, should we just be considering staging and production? For example, do you believe there might be mixed-version problems?
C: Right, so the mixed-version problem is one of the points that I brought up as one of the risks. I've asked the QA team, and I've asked Matthias; so far it seems to be a no, because the only thing the environments share is a database, and I'm guessing that because we're going to put a PCL in place and we're not going to run the migrations, that's not going to be an issue. And also, I guess, it's not going to be tested before we do this rollout, so right now the answer is: probably not, it doesn't make sense that it would be a problem, but we are not sure until we do it.
D: Yeah, I agree.
A: I agree with you too. For what it's worth, they are monitoring the failure rates on the Ruby 2 pipelines and on the Ruby 3 pipelines to check that they are the same, so that should somewhat minimize the mixed-deployment problems we might encounter and give them some confidence.
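For illustration, a minimal sketch of that kind of comparison. The tolerance and the toy data are made up, and failure_rate stands in for however the team actually pulls pipeline results:

    TOLERANCE = 0.02  # allowable difference in failure rate (illustrative)

    def failure_rate(pipelines)
      return 0.0 if pipelines.empty?
      pipelines.count { |p| p[:status] == "failed" }.fdiv(pipelines.size)
    end

    # Toy data standing in for recent pipeline results on each Ruby version.
    ruby2 = [{ status: "success" }, { status: "failed" }, { status: "success" }]
    ruby3 = [{ status: "success" }, { status: "success" }, { status: "failed" }]

    delta = (failure_rate(ruby3) - failure_rate(ruby2)).abs
    if delta <= TOLERANCE
      puts "Ruby 3 failure rate matches the Ruby 2 baseline."
    else
      puts "Ruby 3 failure rate diverges from the Ruby 2 baseline."
    end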
A: But yeah, I agree with you. I think the staging canary deployment should probably stay as-is; we can just make sure that the pipelines and the QA runs are green before moving on, but I don't think we necessarily need to measure anything extra in that environment.
B: Okay, yeah, that's good; that makes sense. Because it just might speed you up to get there. Staging is a more useful environment, yes, because you can see more things, but it still doesn't give you traffic. So actually we probably do want to try and get to production canary relatively soon, so that we can sit it there for longer.
C: So, basically, similar to the auto-deploy pipeline: we don't bake in staging canary, because we're not really testing for anything there; as long as the QA tests pass, we move on to production canary. And on production canary, according to that fancy math chart that scarvac has, we decide how long we want to bake for; and then, depending on what the stage teams want to do, we decide on the other bake times. Basically, right?
B: Well, actually, it depends. If we have to do manual testing, and I assumed we would, then we must do that testing on a staging environment prior to putting it in front of users. So if we're using the order of environments that we have in auto-deploy, we will need those tests to happen on staging canary, right? Because we need to conclude those before we go to production canary.

Does that make sense? What we're doing for this one is adding to our automated tests. Currently, with auto-deploys, we just rely on the automated test suites; it sounds like in this case we have other tests that would need to be grouped into those. So my assumption was that we would put all of this onto the full staging environment and bake it there prior to doing any of the production environments. But that's just my assumption.
B: The tests will have to happen on staging canary, or the profiling and things like that will have to happen on staging canary.
C: Yeah, because the tooling that we have right now is kind of the most reliable way to go forward with a deploy, and we didn't want to really mess with that tooling and do things in a separate order. But yeah, no, that makes sense.
B: That's the question, I think. It depends what you're testing; hopefully not, but we may need to review that. And they should be the same, right? The environments should be the same. But I think those are the key questions; let's check that. Because I think in the past, when we've rolled out, say, for example, sshd, we haven't done it via auto-deploy pipelines; we've done it via the change-request process, where we've done the pre environment, the staging environment, and then the production environments.
B: Just because staging is a test environment and production canary is not.
A: Okay. And that depends on the results of the exploratory testing, and on whatever Matthias tells us they need to test. If they tell us "okay, we might need to test this and this and this", then that's not going to be performed on staging canary.
C: I'll update the issue with that and probably ping Matthias on it. Yeah, okay.
A: I have been discussing this with Matthias and keeping an eye on the progress, and they are aiming for it to happen either in the last week of February, after the 22nd, or in the first week of March; there is still no clear date. They are going to... well, they are, and I am... we are going to continue tracking the progress of the efforts that are happening now, which we just covered. But I just wanted to bounce something off you.
A: Let's assume that it is either the last week of February or the first week of March. How do we plan for it, and what can we start working on for the PCL? Do we need to prepare it one month in advance? What kind of approvals do we need, and what is the process to request one?
D: So for the PCL, the first thing we should do is modify the handbook: open a merge request in the handbook with the dates, the kind of PCL we want, and the duration of the PCL, and those dates should be approved by Steve, Marin, and Alan.
D: After that there will be some discussion in the issue, about why we need it and everything else and so on, and I think that should be enough. Once it is approved, at that point we also have this change-lock YAML file, which I think we should update there as well.
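For illustration, a minimal sketch of what an entry in that change-lock YAML file might look like. The field names and schema here are assumptions, not the actual file format:

    # Hypothetical change-lock entry; the real schema may differ.
    - dates:
        start: "2023-02-22 09:00 UTC"
        end:   "2023-03-03 17:00 UTC"
      type: "soft"                      # the kind of PCL wanted
      reason: "Ruby 3 rollout"
      link: "https://gitlab.com/..."    # handbook MR / tracking issue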
D: Yes, I think one month would be enough, because this would also help the development teams plan their development schedule, especially around the PCL and around the releases and everything else. So as soon as you have some draft dates, I would probably start there: open a merge request, get it into a state where it's about ready to be submitted, and then we will ping Marin, Steve, and Alan on the issue to say that we would need it.
D: This could change, hopefully not, but at least this way we know in advance, and we can communicate it in advance to the GitLab team.
A: Okay, perfect. Should we set the action items?
D: So, one question before we sign off: do we want to meet again in a couple of weeks?