From YouTube: 2023-01-26 Delivery Group: Ruby 3 Rollout
A
Yeah, welcome to the Delivery Ruby 3 rollout. It's January 26th today, so I guess I'll get it started. The overall status of the Ruby 3 rollout so far, from what I'm aware of and what we're affected by: Ruby 3 is now the default for developers using the GDK and GCK.
A
However, if they want to switch back to Ruby 2, they can do that locally, I believe, but the repos themselves have been switched over to Ruby 3, trying to make Ruby 3 the default. Basically, the exploratory testing is ongoing. It involves pretty much the entire company, and its target is to conclude on January 31st.
A
So from what I understood when I wrote this out a few days ago, no new issues have been found so far. I don't know if Mayra has any more updates on that.
A
Okay, and other than that, let's talk about our status.
A
Thank you. Which means, realistically, if this goes through before March 17th, we could release it on March 22nd, right? Theoretically, yeah. Okay, so that's the overall status for us. We're also waiting for the exploratory testing results to define the bake time and expected timeline for the rollout.
A
So about the rollout: basically, QA has mentioned that they need to run the full test suite after each deploy. Now, I haven't confirmed that this also covers staging and production, but they did mention staging canary and production canary, so I should probably double-check whether they also mean staging. I would assume so, or maybe not. Maybe that's because they wanted to pair this up with the reliability tests, like the smoke tests, that run automatically. Sorry.
A
That's what I thought, because I think they mentioned that they need to run the full suite along with the smoke tests, or that's my assumption. Maybe that's why the environments they mentioned were the canary ones and not actual staging. So from what I understand, that's what they raised.
A
So they said: hey, it's going to take 50 minutes to run the full suite. So I'm guessing the timeline would be something like: deploy to staging canary, wait for the smoke tests and our automated test suite to run; once that finishes successfully, we mention the QA DRI and they start running the full suite, which takes 50 minutes; and after that's done successfully, we still continue to bake for another hour. So that's why I said the bake time is two hours, to account for this.
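The bake window described in that turn can be sketched as simple arithmetic. This is illustrative only: the 50-minute QA suite and the extra one-hour bake are the figures quoted in the meeting, while the smoke-test duration is a hypothetical placeholder.

```python
from datetime import timedelta

# Illustrative sketch of the per-environment bake window discussed above.
# The 50-minute QA suite and the one-hour extra bake are the meeting's
# figures; the smoke-test duration is a hypothetical placeholder.
SMOKE_TESTS = timedelta(minutes=10)    # hypothetical; runs automatically after deploy
QA_FULL_SUITE = timedelta(minutes=50)  # figure quoted by QA in the meeting
EXTRA_BAKE = timedelta(hours=1)        # keep baking after the suite passes

def bake_window() -> timedelta:
    """Total time an auto-deploy package sits in an environment
    before it can be promoted to the next rollout stage."""
    return SMOKE_TESTS + QA_FULL_SUITE + EXTRA_BAKE

print(bake_window())  # 2:00:00
```

With those assumed numbers the total comes out to the two hours mentioned in the meeting.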
C
Did I misunderstand anything there, or is that about right?
A
Yeah, I think I'll double-check about staging. But anyway, basically what I asked Mathias to do, and to be honest this should be in that list of asks, is for his team to come up with a table, per individual team, of whether they need to do any activities, tests, or verification between any of the auto-deploy rollout stages. Something like: hey, my team needs to do XYZ activity.
A
I need to do it in this environment, it's going to take this long, and this is the DRI for it. Basically a list of people to keep track of before the bake time is over and we get a green light: who are the stakeholders that give the green light to go on to the next deploy, right?
A
So yeah, I've asked Mathias for that one.
B
Yeah, yes. So we're talking about the Application Performance team giving us a list of tests that need to be performed manually, and you were saying they might be performed after each environment, after staging canary and after production canary. But I wonder if we should limit that to only staging canary, not production canary.
B
Now that I say that out loud, it might depend on the testing requirements the different teams have. But what I would like to avoid is that we have a test list that gets run against staging canary, everyone says yes, this is working fine, and then we move to production canary and everyone wants to re-run all the tests again. That is going to be time consuming.
C
I have a question about that, Mayra. The test suite that we're running in staging canary and production canary is still artificial, meaning it's coming from us.
C
What difference do we expect to see if we test it in production canary instead? I mean, the only difference that comes to mind is that production canary has actual external traffic coming in, but that's separate from the artificial testing we're doing against it. So I'm just wondering what kind of value we would see from running the same artificial tests ourselves in both environments.
B
Yeah, from what I understand from scanning the conversation on the exploratory testing issues, there are some specs that require a certain environment to actually run. For example, some of the specs cannot run on the GitLab pipelines because their environment is set to staging, so they always run on staging.
A
Yep, yep. Oh, and I forgot to write another point here. Mayra and I synced yesterday, and we were talking further about the other issues in the epic that we own. And Okta has logged me out again, give me one sec. Every day.
A
So yeah, we talked further about the issues. Let me actually just share my screen so we can all see the same things. So this is the epic, and basically we discussed the overall plan: the list of issues, the rollout strategy.
A
We're waiting on the testing to consolidate the bake times. Then, generally speaking, for the rollback strategy: because we're going forward with our auto-deploy tooling, we're just going to try to lean on the rollback mechanisms that we already have. So basically this issue is to consolidate the ways of rolling back that we have, a consolidated list of docs, so that we can reference it.
A
Something like this, the production change checklist. We're not sure if we're still going to be using this checklist, because it is a year old, so I'll probably follow up about making another one that's a bit more up to date, with this one referenced in there. Other than that: define the content of the auto-deploy package. That's basically about our auto-deploy pipeline; we're still compiling everything and running everything on 2.7.7.
A
And which files: it's just a list defining which files to change so that, once we merge those MRs in, our auto-deploy pipelines will pick up Ruby 3. So other than that, we have that work to start on.
A
No, no. So this is basically a list of files. Here there's already a list being curated, stuff like the Ruby image being used in the Dockerfile. So for that issue, it's just to consolidate a list of files that need to be changed.
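As a rough illustration of what consolidating that list might look like, the sketch below scans a checkout for files whose content pins a Ruby 2.7 version. The pinning patterns are hypothetical examples (a `ruby:2.7.x` image tag, a bare `2.7.x` version file), not the actual list curated in the issue.

```python
import re
from pathlib import Path

# Patterns that commonly pin a Ruby version; both are illustrative guesses,
# not the real list from the issue discussed above.
PIN_PATTERNS = [
    re.compile(r"ruby[:-]?2\.7(\.\d+)?"),  # e.g. "FROM ruby:2.7.7" in a Dockerfile
    re.compile(r"^2\.7(\.\d+)?$"),         # e.g. the contents of a .ruby-version file
]

def files_pinning_ruby2(root: Path) -> list[Path]:
    """Return files under `root` whose content matches a Ruby 2.7 pin."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        if any(p.search(line) for p in PIN_PATTERNS for line in text.splitlines()):
            hits.append(path)
    return sorted(hits)
```

Running something like this over a repo would only produce a starting point for the per-team table; as noted later in the meeting, the resulting list would still need confirmation from the owning teams.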
D
Do the developers not already have something like this? I'm wondering whether we can actually put some responsibility on the Ruby 3 team. I'm making a big assumption here, so it might not be true, but I'm going to guess they do.
A
Okay, I mean, that makes sense, because if I look back at the repos, if I go back here and go to a file like this...
A
Maybe this isn't a good example, but there were other repos I saw where they were already making those switches to Ruby 3, right.
D
So yeah, I think a lot of developers who work with these sorts of things are used to doing it that way. For example, the container registry: if they want to get a certain version out, they have a process where they pretty much go through a set of steps to merge in the version. So they may not have a list, but at least I think we should do this in collaboration with the developers.
D
At least that way they're the ones who can say all the changes they're expecting to have, like "here are the MRs", and we can then just double-check that everything has merged in.
B
Yeah, and I think it would also be a good idea to ask Distribution to confirm whether those are the files we need to modify, because I identified some files in CNG and in Omnibus, and Scarlet also listed some. But to be sure, we should confirm them with Distribution.
A
Yeah, okay.
A
So I'll take that on as an action item. And yeah, that's basically where our epic is at, from what I'm looking at and what I'm concerned about. Yep.
A
Oh yeah, Mayra, you had some comments.
B
Yep. I wanted to spend some time talking about the PCL. So far, the requirement that we have is that it's going to be a hard one, because there is uncertainty about upgrading to Ruby 3; we haven't done that before at GitLab. I mean, Application Performance has been doing some exploratory performance testing and they think this is going to be okay, but they're not certain about it, so it is going to be a hard one.
B
The duration so far, based on the deployment strategy, is going to be one day, as with any other PCL. And from the discussion on the deployment strategy issue, enabling or disabling feature flags or performing change requests is going to be a risk, because we don't know the impact of upgrading to Ruby 3. So there is a preference for putting a block on those until GitLab.com is upgraded, and then resuming those operations. Well, those are the requirements. Regarding the dates:
B
The Application Performance team is aiming for early March, but as Jenny talked about before, there are still some items to be defined, including the manual tests and some deployment steps to be defined by them. So we are kind of on hold. The exploratory testing has a deadline of the end of the month...
B
If I remember correctly. By then, I assume we should have all the answers we need to plan this PCL. But I'm torn on whether it's going to be too late to announce the PCL by then or not. I'm not sure if we should start the preparations right now without knowing all the answers, or wait for them until the end of the month, or perhaps a bit later, to start preparing all the things we need to do to set up a PCL.
D
One of the downsides of a hard PCL is the impact it has on everybody else. I think that's totally fair for us in terms of not doing deployments, and totally fair for development in terms of not doing feature flags, because they get the big benefits of the Ruby 3 upgrade. Alan might have opinions, though. We should definitely check with Marin and Alan quite soon that they're okay with this being a PCL, because it also impacts anything else that's planned around changes; everything is blocked on that day, right?
D
So, okay, I don't know if you've already mentioned this to either Alan or Marin.
C
Not yet. We spoke about this with Jenny yesterday, and the timeline still wasn't clear, because we still haven't set the due dates in all the epics for when this should be done. My suggestion yesterday was to actually open an MR with our proposal and have the discussion going on in the MR for the PCL, so they can see it as well.
D
We could send it through that, yeah. I think alongside that we can also raise it in one of the infra leadership channels.
D
A one-day hard PCL for the Ruby 3 rollout: then at least we get the early concerns visible, and we can separate that from the actual hard date. But yeah, I would suggest: go ahead, get the MR open. I don't think it'd be the first time that we've done all the prep and set up a PCL and then gone "oh, actually, the date is changing", and that's okay.
C
The chance of having a more reliable platform after all, I think it pays back the price, right? It's a good trade-off that we can have. So I don't see any reason why this should be rejected, or any reason why we cannot do it. I mean, we still have a couple of months before that date to come up with a proper plan, even for infrastructure changes, to work around that 24-hour window. So I think it's a reasonable ask, from my point of view.
B
Okay, perfect. I already have a date in mind, which is March 6th; I considered the release schedule and the release managers for it. March 6th, which is a Monday. Okay, yeah.
D
Do you think Monday is a safe day? My only concern around Monday is knowing how many changes we'll have queued up for deployment from the weekend.
B
It is very hard for me to choose a day, because if you think about it, there isn't really a day that's clearly going to be good for performing a PCL. But I think if we plan it for Monday, we stop auto-deploys on Friday afternoon, perform it on Monday, and then resume the deployments on Tuesday.
D
I mean, the benefit, I suppose, is that you'll know the package is stable because it's sat there for the weekend. But I think Friday is my least popular day, because we won't have APAC cover. A Tuesday, Wednesday, or Thursday, I think, will give us... well, we should also consider...
D
Do you know, we're pretty much at time, but just a question we can also talk about closer to the rollout, and we can figure it out for next time: it would be interesting to know what time-zone coverage we have on the team. Because as I sit here, I realize the three of you right now are all in the Americas, and I'm not.
D
So it'd be good if we did move it to something like a Tuesday. APAC, for example, could begin it if we had people in APAC, or it starts first thing in EMEA, so that, you know, if you've got a 24-hour PCL, it runs from midnight UTC through the day. So let's figure out if we can actually make use of the first half of the day. Yeah.
A
The only thing I worry about on the delivery side of things is: if something goes wrong in staging canary, I'm still not sure of the boundaries of when we call it an emergency MR versus a rollback. I don't really know what would define that, and I think it's a bit out of even our scope to know that confidently, right?
A
So that's the only thing I'm kind of worried about: if things go wrong, who makes that call? Because so many teams are stakeholders in this rollout, right? And then, considering that we have a hard PCL, all that uncertainty makes it higher stakes.
D
Yeah, absolutely. We should make sure we know who will answer those questions and how we handle them. Largely, in the past, the way this has mostly played out is based on available time. So if you have 10 hours of your PCL left, it's fine for developers to spend some time trying to fix it. If you have 15 minutes of your PCL left, we still have to ask: can we extend it? If not, we have to roll back.
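The rule of thumb described in that turn could be sketched roughly like this. The threshold and names are hypothetical; the meeting only gives the two anecdotal data points of 10 hours (clearly enough time to fix forward) and 15 minutes (clearly not).

```python
from datetime import timedelta
from enum import Enum

class Action(Enum):
    FIX_FORWARD = "let developers spend time trying to fix it"
    EXTEND_OR_ROLLBACK = "ask to extend the PCL; roll back if we cannot"

# Hypothetical cutoff chosen between the two data points from the meeting.
MIN_TIME_TO_FIX_FORWARD = timedelta(hours=2)

def incident_action(pcl_time_left: timedelta) -> Action:
    """Decide how to react to a mid-rollout failure, based on how much
    of the PCL window remains (the heuristic described in the meeting)."""
    if pcl_time_left >= MIN_TIME_TO_FIX_FORWARD:
        return Action.FIX_FORWARD
    return Action.EXTEND_OR_ROLLBACK
```

In practice this would stay a judgment call for the stakeholders named earlier; the sketch only captures the "available time" heuristic.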
D
Awesome. Okay, well, Michael and I will figure that bit out, because I think it will largely come down to the stakeholder-management piece.
B
No, I think, well, there is some uncertainty about upgrading all our servers to Ruby 3; it is the first time we're doing it. But it seems under control. Even though we've had a lot of conversations and a lot of discussions, it seems we're on a good path.
B
I'm not expecting anything to go outrageously wrong on that day, but whenever I say that, things happen, so I'm not going to say it. So yeah, let's say I'm excited about it. I'm honestly...
D
Excited about it. I mean, I'm super happy to see the amount of effort the development teams have put into testing and planning this. The last big version upgrade I saw literally got merged without us knowing, and it was chaos.
B
Yeah, so I'm going to set an action item for myself to start the merge request, and then I'm going to bring the three of you in on it and share it in the delivery channel. Sounds good.