From YouTube: Geo Scheduling Call - 2020-01-24
Description
No description was provided for this meeting.
A
The self-service framework: so this is a discussion item that we had, and I closed this one out earlier in the week, because we had consensus and we've gone with separate registry tables, and there's been no concern after I closed the issue, so I'm marking this as done as well. It was good to have that discussion and get it taken care of, because it means that for the self-service framework we don't have to have the discussion again when it comes up. But the label isn't updating itself.
B
In the next minor release, or patch release, of 12.7, you should be able to see the front end for design repositories; the replication and everything actually works. I think for the future, maybe we should devise some form of issue template, because we often need at least one issue for a new data type or feature to verify that, you know, our feature flags are removed and it's made generally available. I think what happened here is that, while removing all the feature flags, the front end was overlooked.
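As a rough illustration of the idea being proposed, an issue template for making a feature generally available might look something like the sketch below (all checklist items are illustrative, not an agreed format):

```markdown
<!-- Sketch: possible "make feature generally available" issue template -->
## Make [feature] generally available

- [ ] Remove all feature flags (backend **and** frontend)
- [ ] Confirm frontend code paths no longer check the removed flags
- [ ] Verify the feature end-to-end on a suitable environment
- [ ] Update the documentation
```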
A
I know what other teams do: they move things into a verification column, and then they go onto the production instance and check that it's there, and once it's there, they move it out of the verification column. But the problem is we're not on production, so we have to think of a different environment where we're doing the verification.
A
A slightly unrelated thing, and that's the last thing I'm going to say: I've done acceptance demos in the past, in teams, sort of before a release, where everybody had to actually present to the team what was implemented and how it looks. That was always fun, but that's just a side note, yeah.
A
Okay, well, let's give it some thought and chat about it separately. Okay, the in-review column: making the registry the single source of truth. I know that Mike has been working on the proof of concept more than anything else that's in here, but I see that the work-in-progress label has been taken off for this POC, you know, so I'm just checking on what the latest was here. A database review is pending, and he did some commits yesterday and a lot of commits today, so I'll be catching up with him.
A
Yeah, so Toon's done part of the database review and it's back with Mike. Okay, but yeah, because they're working on, not the other proof of concept, they're working on the implementation, so I'm expecting that that's going to sit around for a while. Okay: improve the logging of sync services.
B
I picked this up because I wanted to feel useful, but it was a lot of work. So far, I think I've incorporated all of the feedback, and I think I'll actually bounce this back to Evan, because now it works, so I think this should get reviewed one more time and then hopefully we can merge it. I think in general this is something where maybe we need to pay a little bit closer attention: if we've discovered bugs that are, let's say, disruptive, that require workarounds, I think we should pick up the documentation that guides people to the workarounds relatively soon. Otherwise, you know, we may get caught out, because, you know, customers won't know. I've already actually pinged the account management team and the support team, just to sort of make them aware that, you know, there was a bug.
A
Okay, makes sense. Okay, eleven items in dev, that's quite a lot. Let's see: handle moving repository storage. So I put a comment on here, I'm not sure if you've read it yet, where I recommend putting this issue on hold, because we aren't going to get to production in the next couple of months, and given how much work still remains on this issue, it just seems less important than this, and it's also a lot of work. So that's the case.
B
I thought, like, as I recall it, and this was, I think, before I went on paternity leave, maybe just after, I can't remember, the idea here was to go up to the issue description and sort of try out, you know, a Geo install on Helm, because that's one of the things that were, like, I think, in development for months, right, and a bit of discovery. I don't know if this was even successful or not, but I think this was just something that we were interested in doing to, you know, help the distribution team and sort of validate that all of those things actually make sense. I personally think there's no particular urgency to it right now, given what we're doing; it still should happen, but I also don't know how much work this is, right.
A
The failover using Geo: I stopped the instances that Ash had running, because I couldn't find anyone else who's going to be able to do this in the next two weeks, so I think when he gets back he can start the instances up and continue. I'm not going to move us anywhere; I'm just going to leave it there. That's fine. Okay, the spike: they had their third pairing session yesterday, which is awesome. I need to go and see what they've achieved; I would just want to see.
B
I actually, like, I see what's happening here. I think what they're doing is they're still following sort of the naming convention of the generalized API, right: yes, the strategy here is blob and the replicable is package file, and I think then there should be another blob, which is, like, job artifact or whatever. All right, I think this is like...
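As a rough illustration of the naming convention being described (the class and method names below are hypothetical sketches, not GitLab's actual framework code), one shared strategy such as "blob" can back several replicables, like package files and job artifacts:

```python
# Illustrative sketch of a generalized replication API: each replicable
# data type reuses a shared replication strategy such as "blob".

class BlobReplicatorStrategy:
    """Shared behaviour for replicating file/blob-like data types."""
    strategy = "blob"

    def replicate(self, record_id):
        # In a real system this would fetch the blob from the primary
        # and verify its checksum on the secondary.
        return f"replicated {self.replicable_name} #{record_id}"

class PackageFileReplicator(BlobReplicatorStrategy):
    replicable_name = "package_file"

class JobArtifactReplicator(BlobReplicatorStrategy):
    replicable_name = "job_artifact"

# Registry of replicables, keyed by name, as the framework might hold them.
registry = {r.replicable_name: r
            for r in (PackageFileReplicator(), JobArtifactReplicator())}
```

Adding a new blob-like data type would then mean adding one more small replicator class rather than re-deciding the replication mechanics each time.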
A
I think they've been able to make a lot of progress because they've had so many pairing sessions. Like, I watched about an hour of the first one, and even there I could see how many discussions they were having and the decisions they were taking in that hour that would have been just days of back-and-forth. So I think it's just been a very good way of working to get through this initial work.
B
I spoke with him yesterday, and he said that, first, actually having a synchronous line of communication is helpful, because you can bounce ideas back and forth, and the likelihood of making bad decisions is lower, because you have somebody else to actually talk to. But I think I like the strategy here as well: it is very much asking for forgiveness rather than for permission, right. You just move forward and make decisions, and maybe some of those decisions need to be revisited, but we're actually moving. Yeah.
A
I had not done anything this week on how to do this, yeah; that is what it is. Anything on the secondary node for staging? They still aren't at the point of doing the backfill, because they ran into two issues this week. One was that something was wrong with Redis, which Devon now needs to take a look at, and there was a second thing about the foreign data wrapper, which got resolved yesterday.
B
Okay, so I think we can close this, the POC, because my understanding is that what we have established is: we know now that it is possible to pause and resume database replication on a secondary node, right, via Omnibus control commands rather than a rake task, and I think there's also some more clarity on us probably not having to flush the write-ahead logs (WAL). So in my mind, that was the purpose of the POC: to establish whether this is actually something that is possible to do, right, which beforehand we were not quite sure about.
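A minimal sketch of the behaviour the POC established, with hypothetical names (on a real secondary the pause would be an Omnibus control command run on that node, not the Python below; the exact command is not specified here):

```python
# Illustrative model of pausing/resuming database replication on a
# Geo secondary. While paused, the secondary simply stops applying
# WAL; the primary retains the WAL, so nothing needs to be flushed
# before replication resumes.

class SecondaryDatabase:
    def __init__(self):
        self.replicating = True
        self.wal_position = 0  # last WAL position applied

    def pause_replication(self):
        # Analogous to an Omnibus control command on the secondary,
        # rather than a rake task.
        self.replicating = False

    def resume_replication(self):
        self.replicating = True

    def receive_wal(self, position):
        # Apply incoming WAL only while replication is active.
        if self.replicating:
            self.wal_position = position
        return self.wal_position

db = SecondaryDatabase()
db.receive_wal(10)
db.pause_replication()
db.receive_wal(20)   # ignored while paused
db.resume_replication()
db.receive_wal(30)   # catches up after resuming
```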
B
The next steps here, I think, since you and I are going to meet in the afternoon, are to look at: if we wanted users, systems administrators, to actually be able to pause the database replication, what should that user flow look like, right? And I have made this wonderful low-fi diagram already, which I will post, because I think there are a few questions that I have. Like: we already have a pause replication button, but the pause replication button is on the primary, and the pause replication button actually pauses, you know, all the replication for things like job artifacts, these kinds of things, but not the database. So it's a little bit of a lie. It's also on the primary, whereas, as far as I can understand, we aren't able to actually pause the secondary database replication from there. So, you know, I think we have a few sort of decisions to make.
B
Like, where should this pause button actually be, right: on the primary or on the secondary? And can we, like, pause the database replication actually via the UI at this moment or not, right? Because that may be a privileged activity, and, you know, we don't have the capacity to actually do this easily at the moment. So maybe we can't actually do UI functionality for this at this moment, and then is the first sort of iteration on making this easier to implement a task that can be run on the secondary?
A
I think the comment that you've just talked about is useful to just have, like: that you consider this a success and, from your perspective, you're happy to close, because I think proof of concepts are there to verify an idea that you have, and I think it's just nice to have a close-out statement that says yes, this is good. Okay.
B
I have two ideas. So, first of all, there's a ton of stuff in development, you know, so I don't think we should overload this a lot more, because, you know, there is just a lot going on in terms of improvements here. I think, you know, there's no change: I think this should be done by Jenny. I think this is still important, the POC maintenance mode that we talked about last time, especially now that the pause replication work is there.
B
Where is it here? You know, we'll close that. I think we should keep this in, because that will be important to move forward with this feature, especially because we have high-fidelity prototypes now from design. This one here, I believe, you know, is not resolved; Toon is still on it, right, and I think this should just also happen, right. I think that's fine. This bit here we discussed already; you know, that will be sort of a natural follow-on from this.
B
Like this, so that people can say, like: oh, this is actually, you know, like, I'm not on this version, I should probably update. So, yep, I think that's a quick one, you know, if somebody has time. Yeah, you can do it. By the way, this is a question: like, I put an MR in for documentation; is this counted against Geo velocity?
B
I think it would maybe make sense to, like, schedule one or two of these issues here, so that if somebody feels during the week that they are ready to engage on something new, right, they can start doing the verification, which will also, I think, facilitate some, like, necessary communication with what is happening here in the spike, which I don't think is necessarily bad, right, because it will force questions like: okay, I'm looking at the verification, where are you guys actually at, like, how would this work? Maybe.
A
What we should do is, on the epic itself, we should ask Mike and Toon and Douglas and Gabriel about that: something we want to schedule, though, like, as far as we understand, is the verification, which can be picked up in parallel. Is there a particular issue that we can schedule, or, like, how do we start that work in parallel? Yeah.
A
Yeah, it's making me think also about how we need to work within the backup/restore category, because this is not "the whole backup fails"; this is "one section of it fails", and I think we should behave like this in general in normal Geo circles anyway, but I think when we get errors about a particular category, a particular data type, we must involve that team. Yes.
B
Well, I haven't really spent a lot of time thinking about the backup/restore story yet, but I think ultimately, I feel personally, the same principles apply as to how we think about Geo replicables: there's always going to be something more that we need to back up, so ultimately I feel it is much more scalable to involve the groups responsible for these specific data types and share, you know, some of the responsibilities when it comes to backing up that data. Yeah.
A
So what I was trying to say is, I think we should take that practice on from now. We shouldn't start doing things ourselves immediately, like, just because we've taken ownership of the category; we should start as we mean to proceed, which is: we found a bug, so we must get them involved. And I see that you've already pinged James, and Alex has mentioned it, and I think there'll be more coordination to get things done, but I think that's the right way to look at it.
A
Sure, we would have one engineer out completely, like, and we really have enough engineers locked up on various things already without having another person looking at this specific LFS network thing. And yeah, so I think also, in future, when bugs come up from backup/restore, we must treat them exactly the same as this: pinging.