From YouTube: New Staging Discussion - 2021-09-07
B
Should I go ahead and start, or are we expecting anyone else? Yeah? Awesome. So I wanted to just chat a little bit — and I'm happy to take this offline if we want to do a more in-depth thing — but I just want to chat a little bit about the staging canary test expectations. So we do have staging canary now running in the pipelines.
B
Deployments are going through smoothly; we still need to add in the tests. What I wasn't clear on, maybe, is whether we're expecting staging canary and staging to both be in a known state at any one time, with the tests running against both, or whether we're thinking about them as, like, independent things that would each deploy independently and run tests independently.
C
A perfect question — actually, it was something I was going to ask as well. I think the original idea was to try to get staging and staging canary into kind of the same deployment step, as we see with production and canary for production, so we can get to the same situation where we have, you know, an up-level version — our next version — on canary, and then we have a stable version on production, and we can test across those two. So staging canary should never be equal to staging.
C
I guess ultimately that is, yeah, what we're trying to target here. So as long as staging canary is, like, our next version of what would get pushed up to canary itself, it would kind of roll through that way. I'm not sure if that's where we're actually at or not — do you know if that's the case?
B
The bit that we perhaps need to think about — or certainly maybe at some point need to think about — is that there's a chance at the moment that we could be deploying to staging canary and to staging at the same time. They'd be different packages, but they could both be in a deployment.
C
It's actually one test that's going to toggle between the two.
C
...and two environments, yeah. So the idea was to have the same kind of cookie functionality that we have with production and canary: we set gitlab_canary equal to true, and then based on that cookie's existence it will select the appropriate node and direct the traffic there. So that's kind of what the test is doing: it starts the test up, I target canary...
C
...I do what I want on canary; or I target staging, do what I want on staging, flip, and check on the other side, and then basically reverse it. So I get it going in both directions — that's the general idea. I don't know if you want me to go any further.
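The flow C describes — pin a session to one node with the cookie, create data there, flip the cookie, verify on the other node, then reverse — could be sketched roughly as follows. This is a simplified model, not the real GitLab QA code: `Backend`, `Node`, and `route` are illustrative stand-ins, and the only detail taken from the discussion is the `gitlab_canary` cookie.

```python
class Backend:
    """Staging and staging canary run different app versions but share
    one backing store, so data written via either node should be
    visible from the other."""
    def __init__(self):
        self.projects = set()

class Node:
    """A stand-in for one environment node (stable or canary)."""
    def __init__(self, version, backend):
        self.version = version
        self.backend = backend

    def create_project(self, name):
        self.backend.projects.add(name)

    def has_project(self, name):
        return name in self.backend.projects

def route(cookies, stable, canary):
    # gitlab_canary=true pins the session to the canary node;
    # anything else stays on the stable node (spillover ignored here).
    return canary if cookies.get("gitlab_canary") == "true" else stable

def toggle_test(stable, canary):
    """Create on canary, verify via stable, then reverse the direction."""
    cookies = {"gitlab_canary": "true"}           # target canary first
    route(cookies, stable, canary).create_project("from-canary")

    cookies["gitlab_canary"] = "false"            # flip to the stable side
    assert route(cookies, stable, canary).has_project("from-canary")

    route(cookies, stable, canary).create_project("from-stable")
    cookies["gitlab_canary"] = "true"             # and reverse it
    assert route(cookies, stable, canary).has_project("from-stable")
    return True
```

Because the pairing is symmetric, the same test body can run against staging/staging canary or production/canary; only the endpoints it points at change.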
B
No, I think that makes total sense, yeah. I think — I think that makes sense. Okay, we'll have a think about that then, because at the moment...
B
Well, there are a few other things, but at the moment the way we have deploys running on staging canary and staging means you're not guaranteed to have the package-plus-one-package pairing that will be the same as production and production canary. But we can work that out. That makes sense, though, in terms of, you know, what to expect from the tests — great.
C
Yeah, yeah — so that was just, you know, kind of the idea: to prevent those issues where customers end up on GitLab next in production, or, yeah, where for some reason, between the two, traffic gets redirected to an up-level node versus a stable node, and then we have those issues. So that was our — that was our target. Do you know if the cookie routing functionality is in? Yes?
B
It is in. I can — I'll ping you afterwards on the comment. It's not — I don't think it's documented, but I'll ping you on the comment that has it. Yes, we have that.
C
Yeah. And I apologize — I had a family member have COVID, and we lost him. So I've been out for probably seven days since we started this, so I'm a little — I'm a little...
D
A follow-up question — sorry — sure, go ahead — and I'm sorry for your family's loss. So the thing is, the cookie setting that we have in production was designed to route five percent of production traffic to canary regardless of the cookie being set. I don't know if this has changed, but we basically had a problem with some tooling: we were inspecting the versions running on the various environments, and five percent of the time the production API gets routed to the canary environment, so we get the wrong version. So there are two questions here.
C
That's fantastic to know — thank you for that. So is that something you're following up on, Alicia, or do I need to? No?
D
It just came to my mind when you were answering Amy's question, because you mentioned this cookie thing and that popped it into my mind. But the...
D
The thing I'm following is another problem that I think is still related to this, which is the fact that multi-version compatibility is a function of two versions: the old one and the new one. And what we have at the moment in staging and staging canary is not the same as what we will have on production and production canary. So we may end up testing versions that are not the same as the ones we will then run in production, because of the way we do manual promotion.
D
So there is a conversation in the main epic where we were trying to explore the idea of automated rollback, and now, today, I started another issue on the delivery tracker about exploring the opposite idea: reordering the pipeline so that we do staging canary first, then we do QA — which will be a mixed deployment — then we do production canary, and then we do QA.
B
And — and I think the other thing to mention is: I think this certainly is going to be an issue, but it's not a case of absolutely everything stopping until we've solved it. We should continue with the existing plan, and then have a follow-up iteration to change the ordering of the pipelines, or juggle things around, to make the tests more stable. So the work you're doing at the moment will fit into whatever direction we go in on this. Okay.
C
Okay, well, fantastic. So, a follow-up question, back to the cookie functionality: you're saying that even with the cookie set, we're only getting five percent of the traffic directed to canary — is that what I took from that?
B
Oh, okay — so we're checking whether we have the same five-percent traffic routing into staging canary, and if so we should adapt it. I see. Okay, got it.
B
Let me open up an issue for that, Zeff, because I think Pierre — or Job, maybe — might be the best people to actually confirm, since they put the cookie stuff in. So I'll open up an issue and I'll ping you on there, so we can link on that one.
C
Fantastic, yeah — thank you for that. I wonder if it would be easy enough to add some functionality to this cookie routing to say: if canary is equal to false, then just never send it to canary.
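C's suggestion, combined with the five-percent spillover D described, amounts to a three-way routing rule. Here is a hedged sketch of that rule — the cookie name comes from the discussion, but the function, its signature, and the exact spillover mechanics are assumptions:

```python
import random

def pick_node(cookies, rng=random.random, spillover=0.05):
    """Assumed routing rule under discussion:
    - gitlab_canary=true  -> always the canary node
    - gitlab_canary=false -> never canary (the proposed opt-out)
    - no cookie           -> canary for roughly 5% of traffic
    """
    value = cookies.get("gitlab_canary")
    if value == "true":
        return "canary"
    if value == "false":
        return "stable"  # tooling could pin itself to stable this way
    return "canary" if rng() < spillover else "stable"
```

With an explicit `false` value, the version-inspection tooling D mentioned would never be caught by the random spillover.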
C
Awesome, awesome, yeah. So you all are waiting on me at this point. Does anybody have any other follow-up on that? No? Yes? No! Okay, you're good — any other questions regarding the pipelines? And then...
B
...so we can actually work out how to add it in and do all the extra wiring pieces. So whenever you're ready, that's fine. Yes.
C
Yes, so I have an MR out right now where I've added the functionality to our Session class, within our browser functionality, in GitLab QA.
C
So it looks like it's adding the cookie, okay. I'll have to update it — if we decide to go the canary-equals-false route — to add that functionality, to make sure it's targeted at production or staging, you know, in whichever environment this gets run. The tests are written in such a way that they'd be able to run against either of these pairs of environments — staging and staging canary, or production and canary — they would just execute. The only...
C
The only difference there is where we point the tests when we actually execute them. So initially the next MR — actually, I have my code here; I don't think I pushed that one up yet — is just setting up a job for that to execute, the same way we run our regular jobs. Starting out, it's actually going to execute against staging itself.
C
But the real checks are going to be when we're checking that our data is good across the board. So I'll be pushing up things like creating projects, creating issues, creating commits in one instance, and then checking those in the other instance, and then flipping those around, and also running a pipeline. So those would be kind of like your smoke tests for the mixed-environment testing so far, and then we'll kind of iterate on where we want these to run.
C
I don't know — we might want to call them out specifically, as like a canary-staging job, just to give our QA engineers more visibility into understanding what we're actually executing there. I'm not sure how much else we're going to run in QA staging at this point.
C
We just haven't — we just haven't talked about that so far. So right now you're just waiting on me; I'm kind of getting my feet back underneath myself. I should have the existing MR in review today.
C
Hopefully the second MR will also be in review today, and I'll have the test out tomorrow, so maybe by Thursday we'll have something actually executing. So I'll be a day behind — I know Tanya pushed the due date out to tomorrow, but it might be Thursday because of the time I've had to spend away.
B
Okay, well, just give us a shout. Pierre's in New Zealand, so we have a little bit of a lag on that as well, but yeah.
B
Whenever you're ready — Pierre's got plenty of other things that we're still sort of putting in place, so...
C
That's right — I can work that out, yeah. So that's where I'm at, and I might — might tag Nelia here later on as well, to get some feedback from her.
B
Oh — good stuff, you keep going, awesome. And then the other question I had was kind of on the other side of this project, for the staging ref environment, and I know there are sort of questions around how we want to do the deployment for this.
B
I guess first, maybe, a basic question is: are we expecting staging ref to be online, and then we deploy to it in the same way we do staging? Or would the deployment task be bringing staging ref online with the deployment package? I'm not sure how — feel free to point me to documentation if this is covered somewhere — but what would we be expecting for staging ref?
E
Yeah, I will try to address it. From my understanding, if the deployer provides us with the package that should be installed, we can pass it to GET and it will update the environment to that version. And when it's not in use — when the deployment is finished — I think the plan was to keep this environment up, because engineers will use it, yeah.
B
I'd say that makes sense, yeah — thanks for that. Thank you. Oh, nice — so you've done all the elements so far, while we're still talking — okay, cool, that makes sense. And then I think this next one's answered from the diagram; just to double-check that I have it correctly: we're expecting that at the point we do the staging canary deployment, we would keep staging ref in sync with the package on staging canary — is that correct?
E
Yeah, this is how I understand the plan, yeah — but maybe this is something to clarify with Mac as well.
B
Okay, cool — let's go with that. And, yes, okay — I'll clear that up in an issue, and then we can ping everyone and double-check. It probably isn't too hard to move that around later, but we need to work out how to actually do the deployment, so yeah.
E
I think there is an issue for that — it's called "wire in deployer". Let me find it.
B
Maybe — yeah, Jeff mentioned that we need — so we have a kind of high-level issue to, like, wire it up; it's basically as far as we've got. So yeah, Jeff mentioned that we'll maybe need to find a way to get the deployer to talk to the GET environment.
E
I think here is that issue. So once staging ref is, yeah, mostly up, we can proceed with this, yeah. Okay, can we...
B
Wire it, okay — great, great. And then, I suppose, kind of an extension of my first one: I know we kind of have, I guess, two requirements. One is to have a more stable environment with better data for admin testing and things like that. How are we — or are we — planning to do any coordination around that sort of testing, given that we want to be able to tear this environment down easily and spin it up? Is there...?
E
If we — yeah, if I understand it correctly, we will have this environment up and engineers will, yeah, be able to test related changes. But if there is a need to destroy it, it should be quite easy, because we will just run GET once more to deploy it again. But the problematic part here could be re-adding the users that need to be on the environment, and also regenerating the test data.
E
So I think this is something that should be automated. For now we can do it manually, but to be able to replicate it once more we need to automate it in a second iteration.
E
That would be helpful, and we will still need to have Slack reports about the GitLab QA results, because we will run QA there. So it's — it will be great to have a single place.
C
So my initial iteration was just going to be to run it with QA staging, and that's where those notifications — if we run into problems with the tests, they'll be automatically included with the reports that we already create, in their own separate job, because those particular tests will have their own tags and will be executed independently of all the other tests. So we'll be able to target them, I think, pretty easily.
C
So the question is: are these particular tests so important that they need to have some other special call-out? That's probably the next question we need to answer there.
C
I know my initial thought was — for the other tests, I'm going back and actually looking at the existing smoke and reliable tests, ones that are really reliable, because I don't want to create additional unreliable tests — and then trying to focus on making these smoke or reliable tests themselves. That'll give them, I think, all the focus they could ever need, since a failure is going to stop a deployment.
E
Over to you — hello. Yeah, I'm — yeah, I added a small note that — yes, I'm still catching up after being out of office. I ran GET using the permissions and configurations that Jarv created, and a new GCP project, and the environment is up. But there are still some things that should be, yeah, explored, and we will probably destroy it — it was just a single run, just to check that everything works.
E
Its configuration is okay, but I will probably destroy it later this week. And the question is: currently I'm using the Ultimate license that is issued in my name, and I'm wondering if there is some specific license that should be used, or is it fine to use this one? It should be easy to change later — it would just be nice to clarify.
B
Yes, we should — we should create one and set it up with one. I think we could probably just open another issue and get someone to go through it. I don't suppose it's particularly hard, but there are probably just a few bits and pieces that we can do.
B
Let me follow up on that one with Job — he'll know the details of what's actually involved there. We should probably just create its own license and have it running with that, just to keep everything clean. So yeah, we'll get an issue in for that. Thank you.
E
Sounds great. And another point is, yeah — I wanted to know how SSL is configured on the current staging, because we will need to have some certificate to configure on the staging ref environment, yeah.
B
Again, let me stick an issue on our infra project and see — I'm not sure of the answer for how it's currently set up, but maybe it's something Pierre or someone could have a look into, or we can see who's on the actual GET side. So we can — we can certainly help you get that set up. Okay.
E
Good, cool — thanks, thanks. And Meg is not here, but I'm afraid that we are mostly out of time; I'm not sure if we will be able to address those.
B
Maybe just on your point there: do we need to have some sort of process? Like — I mean, access requests are currently the way to get people on, but is there another process we need in order to actually add people going forwards, or is there a way to make that easier?
E
Yeah, yeah — I think, yes, this is why the infrastructure team probably provided Mac and the other managers with more access rights. Maybe, yeah, as a first iteration they can add all the engineers listed in the issue to this GCP project, and, yeah, we probably need to have some process for how to request access to this project. So, okay, that's a...
B
Good point, right — okay. And then on five, yeah — I had it on my list to have a think about. Let's see if we can do anything; if not, maybe, well, we can do this async. We'll probably just need to work out, I suppose, the pros and cons of flipping the flag once and having it applied to lots of places, versus applying it specifically. But I think there's already — there is an issue, so yeah, I'll comment on that and we can work out the best approach.