From YouTube: Plan stage weekly - 2021-03-31
A: Actually, Kushal is on, but let's go with your point first, John, because Kushal's topic is with Alexis, who's out this week. We'll touch on it if we have time, but it's something we can hold off on. And since there's only one other topic in the agenda, we probably will have time. All right, so let's skip Kushal and go to John. That's good!
B: Yeah, okay, no problem. This is just a note on something I've been investigating this morning. Any time you have these interdependent front-end and back-end changes (that is to say, two things going on at once), maybe you change the API and you change the way the client uses the API.
B: At any one time you can have all permutations of this at once: new front-end code using old back-end code, new front-end code using new back-end code (which isn't a problem), and all the other permutations, and that's a problem. We had a minor incident this morning with this, I think, and Kushal helped to quickly revert it. In future, there's no real way around this other than to split out the changes, or you can do the changes together, but both need to be backwards compatible.
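The permutations described above can be sketched as a small table. This is an illustrative model, not GitLab code; it assumes that without backwards-compatible changes, only matching front-end/back-end versions are safe.

```python
# Illustrative model of the version permutations during a rolling deploy.
# "Safe" here assumes the changes were NOT made backwards compatible;
# with purely additive changes, every permutation becomes safe.
from itertools import product

def is_safe(frontend: str, backend: str) -> bool:
    # Without backwards compatibility, only matched versions work together.
    return frontend == backend

for fe, be in product(("old", "new"), repeat=2):
    print(f"front-end={fe:3} back-end={be:3} safe={is_safe(fe, be)}")
```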
B: So the first change needs to be additive. That means you're only adding new behavior, but you're also supporting the old behavior, front end and back end. Then, once it's been released for a month, and you can see on gitlab.com, in our Kibana charts, that there's no more traffic to the old endpoint, for example, then we make the change to remove the redundancy. But the important part is that we can't make assumptions about how clients, like self-hosting clients, are deploying GitLab.
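A minimal sketch of the additive pattern described here, using a hypothetical serializer. The field names are invented for illustration and are not GitLab's actual API: the new response keeps the old field alongside the new one, so old and new clients both keep working during a multi-version rollout.

```python
# Hypothetical additive API change: keep serving the old field while
# adding the new one, so old clients stay unbroken mid-rollout.

def serialize_issue_old(issue: dict) -> dict:
    # Old contract: a single "assignee" field.
    first = issue["assignees"][0] if issue["assignees"] else None
    return {"title": issue["title"], "assignee": first}

def serialize_issue_additive(issue: dict) -> dict:
    payload = serialize_issue_old(issue)       # old behavior still supported
    payload["assignees"] = issue["assignees"]  # new behavior added alongside
    return payload

issue = {"title": "Fix login", "assignees": ["alice", "bob"]}
print(serialize_issue_additive(issue))
```

Once traffic to the old field drains (the Kibana check mentioned above), a later release can drop `assignee` in a separate change.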
B: They could be using Helm charts and Kubernetes, in which case they have a multi-version deployment as well. We also want to move gitlab.com, the production web fleet, to Kubernetes in the future, and the way that works is that pods are allowed to expire of their own accord, so you can't be sure exactly when the deployment will use the new code. Anyway, long story short: in future we need to wait a release before we remove the redundancy. It's a little more awkward, but it will save us trouble in the long run. There's an MR I'm working on to include this in the guidance for multi-version deployments. Any questions or concerns?
C: I just noted that we frequently avoid these problems through the use of feature flags. I would say most of the features that we roll out typically have some feature flag, primarily because it takes lots of relatively large GraphQL refactors or whatever to roll something out. But it is also increasingly arduous to keep those up to date and make sure that we actually use them properly, and then they become difficult to enable because of incidents or whatever else.
C: So I don't know if we want to (and maybe this is Alex's point) document that as a strategy for circumventing this problem.
B: Yeah, so I was just making a note there, but feature flags also are not atomic, so they won't necessarily roll out across the whole fleet at once; they'll roll gradually, basically, is what I should say. So again, they help, but you can't guarantee that users won't encounter this, because it takes up to five minutes or something to roll one feature flag out across the whole fleet.
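The non-atomic rollout being described can be modeled as a toy simulation. All numbers here are invented except the five-minute window mentioned above; the point is only that, for a while, some nodes serve the new behavior and some the old.

```python
# Toy model of a non-atomic feature-flag flip: each node picks up the new
# flag value at a different moment inside the propagation window, so the
# fleet serves mixed behavior until the window closes.
import random

random.seed(0)
FLAG_PROPAGATION_SECONDS = 300  # "up to five minutes" per the discussion

class Node:
    def __init__(self):
        # Each node refreshes its flag cache at some point in the window.
        self.sees_flag_at = random.uniform(0, FLAG_PROPAGATION_SECONDS)

    def flag_enabled(self, now: float) -> bool:
        return now >= self.sees_flag_at

fleet = [Node() for _ in range(10)]
for t in (0, 150, 300):
    enabled = sum(n.flag_enabled(t) for n in fleet)
    print(f"t={t:3}s: {enabled}/10 nodes serve the new behavior")
```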
B: It depends on the feature as well. I mean, for the one this morning, we wouldn't need to worry about the back-end change that was made, because it was purely additive: it was adding a new field. The problem was that the front-end change needed to also support the possibility that that field isn't there, because the browser might have the new front-end change before it has rolled out across the back-end servers.
B: So it's looking for a field that doesn't exist on those back-end servers. The reverse could be true as well; I think it's probably more common, to be honest, because even in non-multi-version deployments you could still end up with the case where an old front-end version remains, because the user hasn't refreshed the browser.
B
But
yes,
actually,
like
feature
flags
will
help.
But
again
we
need
to
be
really
careful
with
those,
because
if
we
put
it,
if
we
end
up
with
a
really
big
change
behind
a
feature
flag,
we
just
need
to
audit
that
change
to
make
sure
that
when
it's
flipped
you
know
we
should
just
probably
treat
it
on
a
case-by-case
basis.
D: I mean, it's not that I know of a lot of instances, but I've not seen it being experienced on the back end. If I understand what we're rolling out, we do kind of have multiple versions of GitLab running at the same time, on different servers. I don't exactly know how our infrastructure works: do we have separate servers where we deploy the front end and separate servers where we deploy the back end?
D: My understanding was more that we have the Rails web app as a whole deployed on different instances, rather than the front end being served by one instance and the back end by another, and I don't know if that's possible.
B: It's possible that your first request, the one that loads the webpage, comes from one node, and then subsequent requests from the front end of the page you loaded are delegated by the load balancer...
B: ...to another node that has a different version of the code. Yeah, I see what you're saying about the back end, but consider changes where we update something, like when we made the change to make IDs consistent in GraphQL; that would have changed the type of iid in some cases. So it does happen from time to time that we would be sending a string instead of an integer or something like that, and you know, it's possible.
D: My question is slightly different, but I guess now it doesn't make as much sense in terms of... well.
A: Yeah, I mean, that's kind of a good question, though. Should we be pushing for that? Because we do have cases where we're using different feature flags on the front end and the back end, and not only limited to what we're talking about here. It's just, I don't know, slightly confusing or hard to manage when we have separate feature flags for front end and back end. So I'm all for sharing feature flags when possible.
A: Okay, yeah. My question was on whether there's guidance on removing the code that we have for allowing backwards compatibility. But now that I'm thinking about it, honestly, on the front end we should probably keep this code in there for a bit, if not forever; we don't want to rely on the structure of the back-end results.
A: We don't want to rely on that completely in most cases, I think. We want to be able to support if something... oh, actually, maybe that's not what I'm talking about. Well...
B: No, I'll tell you, just on that: I think if you consider what that front-end code might be, it's still not very nice, because in this case the front-end code would attempt to look for this field, then would have a failed request or a 500 error or whatever it is, and would have to react to that with a second request.
B: So it's still not great; it just stops the actual breakage of the application for the user experience. So I think it's a good idea to get rid of it at some point, in the next release or whatever. Maybe, like the example we gave there, it's just like database migrations, where we do this kind of expand, migrate, contract cadence: you add the new fields in one release.
B: Then you migrate all the data; then, once you see no more traffic to the old field, you remove it. You don't just simply change the column and then do an update in the same migration, you know, and there is clear guidance on that. Although I haven't copied it in, I would imagine it's something similar to that: we'd need to create an issue to follow up in the next milestone to remove the actual code that's doing the work.
A: To talk about a specific use case: if we're requesting a field in GraphQL that does not exist on the back end, is that erroring? Are we getting an error in those cases?
B: Yeah, I think so. I haven't gone into this MR from this morning, but I think it's causing the epics list not to load in some cases, and what I think is happening there is that the user has new client code, and it's asking for epics with this new my-emoji-reaction filter. Because that filter doesn't exist on the back-end node it's querying, it's getting an error instead, and so it kind of needs to rescue from that error and do something else.
A: I got you. So we don't need to add code to check that it's available; we just need to handle that error, essentially. I think so, yeah. So in that case, then, I think it makes sense to keep the error handling in there, and that's code that we probably wouldn't want to remove in future milestones or future releases.
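The rescue-and-retry approach being discussed could look roughly like the sketch below. The back end here is a fake, and the filter name stands in for the real argument; the point is only the shape of the error handling: try the query with the new argument, and if the (possibly older) node rejects it, retry without it.

```python
# Hedged sketch of rescuing from an "unknown argument" error and retrying
# without the new filter. The server and argument names are invented.

def fake_backend(query: dict, supports_reaction_filter: bool) -> dict:
    if "my_reaction_emoji" in query and not supports_reaction_filter:
        return {"errors": [{"message": "Field 'my_reaction_emoji' doesn't exist"}]}
    return {"data": {"epics": ["epic-1", "epic-2"]}}

def fetch_epics(query: dict, old_backend: bool) -> dict:
    response = fake_backend(query, supports_reaction_filter=not old_backend)
    if "errors" in response:
        # Rescue: drop the argument this node doesn't know about and retry.
        retry = {k: v for k, v in query.items() if k != "my_reaction_emoji"}
        response = fake_backend(retry, supports_reaction_filter=not old_backend)
    return response

print(fetch_epics({"group": "gitlab-org", "my_reaction_emoji": "thumbsup"},
                  old_backend=True))
```

As B notes above, this avoids breaking the page but still costs a second request when the versions are mismatched.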
B: Oh well, yeah, error handling, if it's that general, is probably a good idea to have there. I guess, I don't know: is there logging on the front end for errors? Is there a way to capture stack traces or anything through Sentry? But we don't use that all that much.
F: John, later, could you please paste a link to the issue from this morning? I was curious what the circumstances were in this case, because so far my understanding was that if we make a compatible change in our API and in front-end code, it's still safe to assume that the change is atomic to some extent, because the request will always be served by canary.
B: Yeah, so the first reaction to this was to drain canary, but as Marin pointed out, there's still a rollout across nodes, so you still get a rollout across canary and a rollout across production.
F: Okay, so from now on we should keep this rollout window in mind, because it seems to me that it might complicate some code changes quite a bit.
D: So if we are adding a new feature on the back end and front end, right, and it's just a new field, then there is no way to prevent the error from happening other than just handling it in a way that says: well, we know that this field is not yet there, let's do another request. There is just no other way, at least at first glance.
B: Perhaps, if we were to have some sort of policy, if you like, where we have to roll out the back-end change a month in advance, or a release in advance, of the front-end change. But I think that's much more painful than the problem it prevents.
D: The request, like, uses two different versions, so you don't really have to do it a month in advance. But in a way, and I don't know if it's possible (it's more of an infrastructure problem, I guess), could we make sure that the new requests go to the updated servers, the updated kind of service? But I don't know if that's possible, yeah.
F: Well, now we also have internal APIs. If we are talking about, for example (and I think this would be the most common friction), when we change what attributes we pass to a Vue component which we initialize in our view, so that we change the set of attributes we pass to this component, then this internal API is not backwards compatible.
B: Cool, I put the MR in. There's a discussion on it, and this is a known problem across the engineering department that we're trying to get better at.
B: I mean, these are serious problems between front end and back end, but when you get something like we had earlier in the month, where tokens were generated using old code and then consumed with new code, that causes a real problem, because those tokens are persisted. Then the problem is much, much more pernicious and harder to solve.
A: Okay, sorry, I have one point that I just added on this, and I think I answered my own question. But in GraphQL, for something like the example from this morning, where we added a field, I believe, to the query (my reaction) and it wasn't available on the back end yet, we're going to get an error, a top-level error. So we're not getting everything else from the epic; we're not getting anything about the epic at all.
A: How do we do it with, like, filtering? I'm sorry, John, if you have a way to do it, but yeah, how do we do...
A: Filtering: if, on the REST API, we send in a query that does not exist on the back end?
F: I wanted to ask whether you could inspect the GraphQL schema before you pass this query. My point is that we expose the whole GraphQL schema, so it should be possible for the front end to know, before you actually send the request, whether this argument is available for a resource or not.
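The introspection check F suggests could be sketched like this. The schema snippet below is a hand-written stand-in for a real introspection response, and the field and argument names are invented:

```python
# Sketch of checking argument availability against an introspected schema
# before sending a query. The schema dict mimics (very loosely) what a
# GraphQL introspection query would return.

introspected_schema = {
    "Query": {
        "epics": {"args": ["groupPath", "state"]},  # no myReactionEmoji yet
    }
}

def argument_available(schema: dict, field: str, arg: str) -> bool:
    return arg in schema.get("Query", {}).get(field, {}).get("args", [])

print(argument_available(introspected_schema, "epics", "myReactionEmoji"))  # False
print(argument_available(introspected_schema, "epics", "state"))            # True
```

As the discussion below notes, the catch is keeping the cached schema current across a multi-version fleet.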
D: Yeah, I think the REST API is slightly different: if you send a param that is not there, it's just ignored, not processed, because we use declared params and so on, and we process those. You'll not get an error if you try to send a param that is not defined; sort of, I think, is the behavior. Whereas GraphQL is more strict in that way: you cannot send a param that is not known to the schema.
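D's contrast between the two styles can be sketched as two toy handlers. The whitelist behavior loosely mirrors Grape-style declared params; both handlers and their param names are invented for illustration:

```python
# Toy contrast: REST-style endpoints with a declared-params whitelist
# silently drop unknown params, while a GraphQL-style validator rejects
# the whole request.

DECLARED_PARAMS = {"state", "labels"}

def rest_style(params: dict) -> dict:
    # Unknown params are simply ignored, so old servers tolerate new clients.
    return {k: v for k, v in params.items() if k in DECLARED_PARAMS}

def graphql_style(params: dict) -> dict:
    unknown = set(params) - DECLARED_PARAMS
    if unknown:
        raise ValueError(f"Unknown argument(s): {sorted(unknown)}")
    return params

print(rest_style({"state": "opened", "new_filter": "x"}))  # new_filter dropped
try:
    graphql_style({"state": "opened", "new_filter": "x"})
except ValueError as err:
    print(err)
```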
D: I think that's how we kind of approach the GraphQL thing in this sense. We should probably... I don't know how expensive it is to check the schema before you do any requests; maybe it's very cheap and something to be done as a middleware thing on the front end, and then you kind of avoid this problem. But it's not sort of terrible on the REST side, I don't think.
B: Oh, go ahead. I was just going to say, I just thought about that a little bit: in the case of a single-page application, we would load that schema once, right, and then that client would think of that schema as being the schema. But in this case we would have to actually load the schema every time we want to make a request, because we'd need to check that no new additions or subtractions have been made to the schema.
D: We can get into debates about whether that's very expensive and not really performant, and then we should break it down. But that's the intent of GraphQL, right? Instead of doing multiple requests as REST does, you just do that. So ideally you'd load the schema, check it, and then do the request for the data and so on.
A: Do you know, Mark? Otherwise we should chat with Chris.
G: Yeah, I was just trying to dig into an issue here to see. It's supposed to be premium; that's where I had scheduled it before Kristin took over planning, but I would double-check with Kristin just so that we don't make a mistake.
A: Okay, so Kushal was going to talk through some of the roadmap date-range filtering we talked about a couple weeks ago, but it's going to lead to some questions for UX and/or product, and Alexis is on vacation this week and Holly had to jump, so do we still want to talk about it?
H: I think we can discuss it, probably in the same issue where we are tracking this UX work, because then I can put all the thoughts there instead of discussing here, like what ideas I have for the first iteration, and then get the inputs both from UX and product.
H: So we get rid of horizontal scrolling; that's what I have planned for the first iteration, where we do not allow any form of horizontal scrolling, even for scrolling a fixed timeline. Instead, whatever number of columns we show for either of the views, be it years, quarters, months, or weeks, those columns will stay fixed within the viewport size. That way we can leverage GraphQL pagination as well, because then we will get rid of the automatic expansion of the timeline.
H: If you start to dynamically insert or remove epics depending on where in the timeline the user is, depending on the scroll position, we need all the epics to be present, because otherwise the sort order goes out of place in case we don't show all the epics at once. But if we get rid of it, then it also allows us to have some form of pagination.
H: Obviously there are a lot of ideas that we are exploring, and to come up with something for the first iteration we need it to be as simple as possible, because we will not only be touching the filtering part, but we will also be touching the pagination parts.
B: Cool, thanks, that's really good information. I wonder if, on the initial load of the page, we could just put pagination on it, or a limit on it, until the user applies a label of some sort, and then we remove the pagination. You know what I mean? Like, put a limit of some sort.
H: So right now we cannot have unlimited epics anyway, but we have updated the GraphQL back end to have 2000 as the limit. So what happens is, when you open the roadmap page for GitLab or a group, you already see a banner at the top of the roadmap which says that there are more epics than shown here. So if we want to just quickly prevent loading all 2000 epics at once in the current design of the roadmap, then we can do it.
H: So if you have an epic which ended, say, five quarters ago, maybe in 2019, and you scroll the timeline and go back to the year 2019, then the request that gets fired to pull in the epic will basically insert that epic into the list. And since the user also assumes that the roadmap is sorted by a certain sort direction, like start or due date, such an epic insertion will be incorrect within the list.
H: And we can only remove infinite scrolling if we have some form of alternative there, and that's what we discussed in the last Plan team call, where Alexis came up with a plan: we can have a five-year roadmap, where the user just selects a five-year view and then we show everything at once. But then the user explicitly knows that they have selected a time frame that is going to take some time to load.
H: But for the initial load we will show only the current year or current quarter, and that should narrow down the number of epics that we show by quite a bit.
B: Awesome, that's brilliant, thanks. I had a quick follow-up question: you mentioned that on the back end 2000 is the limit on the GraphQL API, but I noticed the limit on the roadmap is 1000, so can we just drop it to 1000 on the back end as well, so Jan may be able...
B: Awesome, thanks. I might have been looking at an old issue. I don't want to mention it on the recorded call, on a public call, but yeah, I saw an issue related to the complexity of these requests, and it mentioned that the limit was still 2000, but it's probably old.
H: Felipe might be aware of this: when we started hitting these limits, where we were showing a lot of epics at once and even that wasn't sufficient, we started with the 1000 limit, but then we realized that we can probably fetch 2000 epics.
A: Okay, well, yeah. We'll look into that and touch base with Felipe.
A: So what we're doing is, instead of one call for a thousand epics, we're making 10 calls for whatever the limit is per call. But is it possible, when we're using cursor-based pagination, to make all those calls in parallel? So, two questions, yeah? Does it make sense to...
H: It doesn't make sense to do them at the same time, obviously; I don't think those calls can happen in parallel, because we won't have the cursor information unless we receive the response from the first request. So it will basically be a cumulative series of requests, where the first request completes, then the second one fires, and only after the 10th request is completed will we be able to combine the results.
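The sequential nature H describes can be sketched as a loop. The page source below is a toy stand-in for the GraphQL endpoint; the key property is that each call needs the cursor returned by the previous one, so the calls cannot be issued in parallel.

```python
# Sketch of cursor-based pagination: each request depends on the cursor
# from the previous response, forcing strictly sequential fetches.

EPICS = [f"epic-{i}" for i in range(25)]
PAGE_SIZE = 10

def fetch_page(after=None):
    # "after" plays the role of an opaque endCursor from the prior page.
    start = 0 if after is None else after
    page = EPICS[start:start + PAGE_SIZE]
    next_cursor = start + PAGE_SIZE if start + PAGE_SIZE < len(EPICS) else None
    return page, next_cursor

results, cursor = [], None
while True:
    page, cursor = fetch_page(cursor)
    results.extend(page)  # each call must wait for the previous cursor
    if cursor is None:
        break

print(len(results))  # all 25 epics, fetched one page after another
```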
A: Okay, anything else?