From YouTube: New Staging discussion - 2021-08-23
A
Awesome, hey everyone. This is the kickoff and question-answering meeting for the new staging efforts. The team has a task laid out already, and this is where we jump right in to hash out the discussions and questions. First off, thanks everyone for reviewing the video and the kickoff and digesting the task. I saw a lot of movement in the task already, and we can go from there.

The first agenda item: Jarvis, Amy, and Skarbeck had a discussion on how to do canary staging. I think this was before the recent round of task planning, so I'm just putting it here for historical context. We don't have to dive in unless there are any flags; I reviewed it and it seems to be fully aligned.
A
Moving on to point two: I'm proposing that the staging 10k is a separate DB, to start fresh, and I want to point to the last item. I think we should just not use customer data, period, for this environment, so we have a lot of mobility in adding extra testing capabilities going forward. This is starting from the other end of where we need to go. John had a meeting with Nick already, and he agrees that a fresh DB is the way to go. Nick?
B
On the staging 10k plan, are we thinking to spin up a secondary right away, or is that a future iteration?
A
If we rinse and repeat fast, it means we're forcing ourselves to codify all the setup in the script, and setting up Geo will be an opportunity to prove to ourselves that we can set things up fast and bake it in. So once we have the first node up, we will explore how we can set up Geo, or move the existing Geo into the environment.
B
Yeah, that sounds good. I think GET (the GitLab Environment Toolkit) already supports it pretty well, and the sooner we try to incorporate Geo, the fewer problems we might have down the line. So yeah, I like that approach. Thank you. Cool.
A
All right, cool. Jeff — oh, sorry, Jeff, I don't think I passed the mic. Do you want to vocalize anything? Any thoughts there?
A
Cool, thank you. Number three: Amy's on the call. She said in the kickoff video we talked about learning how to rinse and repeat faster, and the question is: is the intention that the new staging 10k will be regularly torn down? If so, should we open another issue to work out how to link this into deployments and avoid blocking tasks? Before I go in there: I think we're not going to tear down anything at this point, so as not to block existing deployment tasks.
A
The existing staging with the new canary node will be our gate, our golden gate, while we work on this newer environment. I don't think we'll be tearing it down every day; it's more when there's an opportunity to set up new users.
A
Nick, I think you asked me about gathering test requirements. I think we can start that now, but I want to push us towards getting the QA accounts — the test accounts, the paid test accounts — into the migration or the script. So then, when we set it up, we just commit it to code, run it, and you get 10 more QA users, and then we fan it out to the team.
A
I think there will be some rinse and repeat. I'm probably the stupidest person in the room in terms of SRE, so I rely on the team for whether the tear-down is small or big, but I think the focus should be to automate all the provisioning as we go and prove to ourselves that if this changes, it's in code and we can just set it up from scratch without the fear of moving things around. Steph, you had a comment there?
D
Just to make sure I'm on the same page: the tear-down comments — that was specifically the staging 10k environment you were referencing, that we would set up with GET? At least that's what you were mentioning in the video. That's not the staging canary that we're asking infrastructure to set up to help us with the mixed deployment?

A
Yep.
A
That one will be, again, the wall that we need to make sure is working and gating correctly. So the tear-down comment isn't about staging canary or the existing staging. This is the new environment we're going to build together and make sure is provisioned correctly — free, in a way that lets us move fast. That's the right way to put it going forward. Yeah.
A
So the goal is: on the self-managed side, GET sets up things that are really at scale, and we put this in the press release already — JiHu is using GET to deploy gitlab.cn already. So we need to really shift away from the idea that GET is only for self-managed. It's not.
C
We discussed naming this morning — it's linked on the agenda, and there's a YouTube video.
C
Staging 10k is the name, and that's the project I created this morning. If we're cool with that, I think we're good; if we want a different name, let's decide soon, because I think we're going to start deploying to it very soon.
C
No, I mean, I just created the project through Terraform this morning and we're pretty much good. Once we start to point to it, the endpoint will be like staging-10k.gitlab.net or something. Once we set it up, it'll be kind of a pain to change the name later, so I'd prefer to have a name that we want to keep. If staging 10k is good, then let's just stick with that.
D
Just having them both named "staging" seemed to maybe draw a little bit of confusion from people, but yeah, I'm not particularly strongly opinionated about it. I'm just calling it out.
F
Yeah, but the point is — I think this is also where Amy's question is coming from — if we look at the graph, the mock design, long term we basically have a parallel view: we do staging canary and staging, and then there is a parallel branch which goes into staging 10k, with QA runs that are interleaved, and then there's a gating point: we can't go to production canary unless all the staging environments are passing QA.
F
So I think this is where Amy's question is coming from, because basically, long term, what we are doing here is adding a new potential point of failure in the auto-deploy pipelines. If GET is broken for any reason, then this is a new blocker, because we will not be able to provision — or maybe it's already provisioned, but then we have to upgrade. I'm not familiar with GET yet, so I don't know: how do we upgrade?
F
Is it something we go through GET for, or do we do it with an Ansible script like we are doing now? On Ansible or Kubernetes, it really depends, right. So I think this is Amy's point: how many new blockers or potential points of failure are we introducing? Because this is one part of our delivery pipeline that is really fast and completely automated.
A
I think it's a great call-out. Let me add two phases. Phase one: let's set up the 10k with no arrow down, so there's no perception of another failure opportunity. Then phase two: once it's solid, we will have it be the gate. The reason we do this is to make sure we move ourselves away from calling it a place for failure.
A
It's a place for us to give a feedback loop to everyone that the quality is high. That's why we're going to improve the signal-to-noise ratio and make sure the failures are calling out real problems — that's what we aim to do. So let me go do that in the epic and have it be broadly documented. Seth, Josh?
A
If you want to think of a new name, we can do that today. I think I need help with what to call the existing staging — and I know we can't add the word "existing" to an already-used namespace. That's why I kept "staging", and I didn't want to call the new one "new staging" because it might become something else, so I'm calling it what it is: staging 10k. The qualifier isn't in front, it's just at the back. That's where I landed.
A
We have today, so if you want to call it staging reference architecture, or ref arch, that's cool too. That buys us naming headroom if we want to deploy a 50k later on. All right, let's move that offline, async. Let's go to point four. Nick, you had a question there.
B
I won't spend too much time on this — I just asked if we can start gathering the additional test data requirements from development teams right away. Sounds like you answered that we can go ahead and do that, so we'll get started on that effort right away. Thank you.
A
This is awesome. I have heard a lot of complaints, and it's always been at the back of my mind. I would love it if you and Ian can be the holders of the voice and just bring it to us: what does development need? So far I've collected the users in different paid tiers, plus admin accounts and admin access, so feel free to fan out now and let us know going forward.
B
Yeah, that does. Is there anywhere else we can look to see if there has maybe been some other feedback in the past? I think you had a quality team tasks issue tracker — if I comb through those, would I be able to find it, maybe?
A
It should be in the epic, and I tagged, I think, Luke Duncan and Lindsay Kerr, who gave that feedback. Thank you.
A
Cool. Any other thoughts on test data requirements?
A
All right, you want to roll? Let's see what you have. Number five.
F
Thanks. So this is just in the interest of speeding up the development here. There's this issue here, which is about testing mixed versions. The end goal is that we then run QA on staging canary and staging and figure out if there is any broken compatibility between versions.
F
So my point here is that instead of waiting for staging canary to be there, we can start right away with production canary, because from my experience as a release manager, oftentimes we pick up those failures from user reports or our own reports as GitLab employees. We already have a mixed environment in place which runs QA.
F
So probably we can start working on specific QA tests for mixed-version environments right away.
D
Yeah, I felt the same way, Alessio, when I saw the list. We already have production canary, and we already have that routing functionality where we use the cookie to go between the two environments. So I was going to go ahead and start working on that and build it in, so it's up and ready once we have the staging canary environment going.
A
Is this a matter of: we have a test to validate this, and this is making sure the tests can run on production and production canary, without having to wait for staging canary to get set up — a test that routes between the two via the cookie and makes sure that both versions don't render a 500. Is that it?
D
Yeah, as is — building the functionality into our framework to be able to switch the cookies on the fly. Yes.
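The cookie-switching idea above can be sketched roughly as follows. This is a minimal illustration under assumptions: GitLab's canary routing is commonly driven by a `gitlab_canary` cookie, and the helper names here are hypothetical, not the actual QA framework API.

```python
# Hypothetical sketch of on-the-fly canary routing via a cookie.
# Assumes canary traffic is selected by a "gitlab_canary" cookie;
# function names are illustrative, not an existing framework API.

def cookie_header(use_canary: bool) -> dict:
    """Build request headers that pin a request to canary or the main stage."""
    value = "true" if use_canary else "false"
    return {"Cookie": f"gitlab_canary={value}"}

def check_both_versions(fetch) -> dict:
    """Run the same request against both deployment stages.

    `fetch` is any callable taking headers and returning an HTTP status
    code; a mixed-version regression shows up as a 500 on one side only.
    """
    return {
        stage: fetch(cookie_header(stage == "canary"))
        for stage in ("main", "canary")
    }

# Example with a stubbed fetcher standing in for a real HTTP client:
statuses = check_both_versions(lambda headers: 200)
assert all(code < 500 for code in statuses.values())
```

The point of the second helper is the gating check discussed here: the same QA request must succeed on both the old and the new version of the deployment.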
A
That's great. I think we can start that in parallel now and just use production canary to test it, and if we see failures, that gives us a shorter feedback loop — like, hey, we have another thing coming in. I had a question that I need to roll up to the gitlab.com standup: given what we know right now, when do we think we will get a staging canary node up? Is it a matter of a week, two weeks? What do we think?
C
Mac, I think the problem is that the DRI for this is not here, so it's hard to say. I can give you an idea — I would say two weeks is reasonable, but I think we'd have to talk. It depends what other work is happening, and I've already seen MRs for this, so I would even say optimistically this week — but I don't know.
A
Okay, I'll give a rough timeline of one to two weeks, and then we'll go from there. Seth, I think the story of working on the canary node in parallel with the tests is a great update. When do we think we can get a mixed-version deployment test running in production and production canary? Is it the same time frame — a two-week thing, a week? What do you think?
D
I'm on two days this week because of my caregiving, so it might be next week before I get my changes rolled out.
A
Cool. Pierre, if you're watching, please work with us async in the issue as well on the timing — we want to make sure your opinions are captured here too.
G
Yeah, I wanted to clarify what we expect staging 10k to be: a hybrid 10k or a full Omnibus environment? What is the expectation there?
A
We are already selling hybrid deployments, so let's dogfood that, and if the new Kubernetes components get trickled down to GET, let's use them. As more things move into Kubernetes, and a lot of people are using GET already, let's connect all the dots and all the efficiency rather than just use Omnibus. Sorry.
G
Yep, okay, good, thanks. I have a next point as well. This is just a small summary of today's discussion with Jeff and Nick — here's an issue — and there's the question about the name. This is still a big open question: what domain name should we use for staging 10k?
G
Yeah, and the last thing is that to store the GET configs we need a project. We already have something similar for the performance environments, stored under the GitLab Environment Toolkit configs folder. Would it be okay to create a new project for staging 10k under the same group?
G
Okay, then I will start with it. And a small FYI that I will be out at the end of this week and all of next week, so if you have any questions about GET, please feel free to reach out to Nick or Grant, or in the GitLab Environment Toolkit Slack.
G
Sorry, I also have another point: what would be the best place to create an issue to sync feature flag enablement between staging and staging 10k? Who would be the DRI for it?
D
So is the thought here that when a developer enables a feature flag on staging, it — yeah.
G
Automatically gets the same, yeah. Because there can be issues where a specific test fails only when the feature flag has been enabled. For example, last week we had an incident where a feature flag was enabled only on production and not on staging, and the problem was missed. So this is quite a significant thing, I think, that we should pay attention to.
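To make the sync concrete, here is a minimal sketch, assuming flag state can be listed per environment (GitLab exposes instance-level flags through its features API as entries with a name and a state; the function and sample data below are illustrative, not an existing tool):

```python
# Illustrative sketch of detecting feature-flag drift between two
# environments, e.g. staging and staging 10k. Assumes each environment's
# flags can be listed as [{"name": ..., "state": ...}, ...]; the helper
# name and sample flag names are hypothetical.

def flags_to_sync(source: list[dict], target: list[dict]) -> list[str]:
    """Return flag names whose state on `target` differs from `source`."""
    target_state = {f["name"]: f["state"] for f in target}
    return sorted(
        f["name"]
        for f in source
        if target_state.get(f["name"]) != f["state"]
    )

# Example: one flag flipped on staging but not yet on staging 10k —
# exactly the mismatch that caused last week's missed problem.
staging = [
    {"name": "new_diffs", "state": "on"},
    {"name": "ci_graphs", "state": "off"},
]
staging_10k = [
    {"name": "new_diffs", "state": "off"},
    {"name": "ci_graphs", "state": "off"},
]
print(flags_to_sync(staging, staging_10k))  # → ['new_diffs']
```

A ChatOps job could run this diff on a schedule and either apply the missing flags or open an issue, which is the automation being asked about here.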
F
It sounds like gl-infra, or maybe the delivery tracker, would be a good place for this, because these automations are implemented in ChatOps and are usually handled by the infra or delivery team. So at least as a starter.
A
I'm curious, as things come together, to see: if we're going to have more than one staging, is it beneficial to maintain a different permutation of feature flags to make sure we're catching everything? Because if it is one environment, the timeline is serial, correct? If you switch the flag and you run the test, there's only one permutation of feature-flag sets that we can catch with that set of regressions, and if we have things in parallel, we can potentially test two permutations. We also don't want to test all 100 permutations — it's the 20 for the 80: the two permutations that matter are one for the new stuff and one for the old stuff, making sure it doesn't break.
A
I think there's something worth exploring there, to take advantage of any parallel gates that we set up in the future, although I don't have enough reading on it right now.
A
Oh, I don't know yet — I don't want to prescribe. I think the key is: if we find value in having two golden sets of feature flags and we're getting value out of it, then we would find a way to add a facility for it. If there's no value, then there's a case for not doing it as well, and it might be that we just sync from production.
A
Yeah, do we have an issue to sync the feature flags for the new environment? If not, let's create one.
A
Thank you, Neil, I appreciate it. We're at :56, so let's make sure we close it out. Number 10: great traction! Thank you, everyone, for jumping in and digesting the information. What would be a good next thing?
A
We could do a bi-weekly, but I want to lean on async. If we think this is going to be a working group — which I'm trying not to do, because working groups tend to take a long time — it feels a lot fresher to do it under the bill of engineering allocations and just force ourselves to be async: send out YouTube videos and digest them.
A
Okay, yeah. I think that coincides with the new tests Seth is working on, and the canary node possibly being online by then. So I'll set another sync up for two weeks out, and then I ask all of you to just lean async going forward.
A
I do not want to set up a recurring meeting if I don't have to. And then the last one — I think this is a ways out in the future — I'm looking forward to seeing how we can improve monitoring and the signal-to-noise ratio. Let's make it clear from the get-go: we have an opportunity to run tests selectively and more impactfully, and to make sure the feedback going into delivery and infrastructure is all calling out real problems, so we can confidently tell the team: this is going down.
A
If the signal is really bad, there's obviously a problem — primary keys, a database regression. This is an opportunity to make the signal really crystal clear.
A
All right, we're at time. Great discussions. Vincy, would you mind posting this on YouTube and setting it to public?
A
Awesome, great discussions. Thanks for running async, and I'm going to coin it: we're moving at the speed of GitLab. There's a Tesla plan, but there's also GitLab speed. So thank you, everyone. See you in two weeks. Bye.