From YouTube: Geo Epic Conversation : Verify all replicated data
A: Great, well, thank you for taking the time to chat today. I was keen to go over this epic because we're getting close to needing to start it now, and I wanted to make sure that we know what needs to be done, that we have a rough idea of the rather large tickets in there and how we're going to break those down into manageable pieces, and to check that there's nothing missing.
A: I think it should be okay. I had dragged these into an order before the meeting, but I see that the order they were in is not being respected. That's fine. So the first one I wanted to talk about: it sounds like it's a bit of a refactoring task, which is to DRY up some code a bit, and it does seem like a lower-risk thing, just extracting shared code.
B: And it also looks like the epic, the formal list of issues, has been modified over time; you only see that in the front end now. Looking at this, I interpret it as: this issue depends on "Automatically verify uploads on secondaries", which I don't personally understand why, but that's another issue, and that one is weighted at five. Okay.
B: Actually, if I dig a little bit deeper here: "Automatically verify uploads on secondaries", which is an improvement to uploads, depends on another thing, "Constantly verify uploads on the primary", which is again in "Verify data" within that epic. So I almost feel like here we need to look at this a little more closely, you know.
C: It's a specialization of how the upload verification works, but it's mostly the same code. If we're going to weight it, I would say, for example, doing that for uploads is a ten, and doing it for the other types is a one. It's just: okay, you need to change a few things to make it work. Okay.
A: So it sounds like, okay, so we've moved over "Automatically verify uploads on secondaries". We need to do that. You know, we need to constantly verify the uploads on the primary, then we need to automatically verify uploads on secondaries, and once we've done that we can then do the automatic verification of LFS objects. Yes, okay. So coming back to "Constantly verify uploads on the Geo primary": that's currently an eight as well, so I think we also need to be more specific about what's included in there.
A: Okay, so for this one, because it's an eight, I think we at least need some sort of bullet-pointed list around what makes up the eight.
C: They share some sort of complexity. We have to deal with both local and object storage, and that adds up. So we basically need to build an interface and switch from one to the other. Local files are easier, but object storage is sometimes hard, because you need to build specific types of tests.
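The local-versus-object-storage interface described above could be sketched in Ruby roughly like this. This is only an illustration of the idea; the class names, the `verified?` helper, and the record shape are all hypothetical, not GitLab's actual code:

```ruby
require 'digest'

# Both stores expose the same #checksum method, so the verification
# code does not need to care where a given upload lives.
class LocalFileStore
  def initialize(root)
    @root = root
  end

  def checksum(path)
    Digest::SHA256.file(File.join(@root, path)).hexdigest
  end
end

class ObjectStore
  # A real implementation would stream the object from S3/GCS;
  # here a plain hash stands in for the remote bucket.
  def initialize(objects)
    @objects = objects
  end

  def checksum(key)
    Digest::SHA256.hexdigest(@objects.fetch(key))
  end
end

# Pick the store based on where the record says the file is kept,
# then compare checksums.
def verified?(record, local:, remote:)
  store = record[:store] == :object ? remote : local
  store.checksum(record[:path]) == record[:checksum]
end
```

Because both stores respond to the same method, the hard part (and the extra tests) lives inside each store implementation, not in the verification loop.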
A: I think having them in one issue makes sense, but maybe having them in two merge requests would make sense. Yeah, if we do it in two issues, it's the type of thing where I think it should go together. I mean, I know it's not an all-or-nothing thing to do it like that, but I feel like it would be better to release support for both of them than to ship local files first and then do the object storage separately, yeah.
C: Yeah, this is again the case where you have to build an interface that switches from one to the other, and there is also the possibility that some of your data is local and some of your data is in object storage. This will be the case when you have an installation that started with local files, then turned on object storage, ran with object storage for a while, and then decided to migrate what is left as local files.
B: You know, like, let's say the time span here is definitely wrong: two months, yeah. And so my question is, if that is really true, is there a way we can actually parallelize some of this at all? Because otherwise it's very sequential, right? And I don't know if that's just a technical requirement, maybe.
C: So, if you remove some of the complexity of doing this, the cron job, or whatever it is that gets triggered after a certain event, this can probably be split that way: build the verification code as one issue, and do the triggering, for example after flushing something to the system or after some event, as another. That way there are fewer things you have to do when processing a certain event. This reduces the complexity and the scope of the issue.
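The split described above, keeping the verification code in one place and wiring up the cron-style and event-driven triggers separately, might look roughly like this sketch. All class and method names here are illustrative, not the real GitLab Geo workers:

```ruby
require 'digest'

# Core verification logic lives in one place; the two entry points
# below are what the proposed split would land as separate issues.
class UploadVerifier
  attr_reader :failures

  def initialize
    @failures = []
  end

  # Compare the stored checksum against one computed from the data.
  def verify(upload)
    actual = Digest::SHA256.hexdigest(upload[:data])
    @failures << upload[:id] unless actual == upload[:checksum]
  end

  # Entry point 1: a periodic (cron-style) sweep over pending uploads.
  def run_scheduled(pending)
    pending.each { |u| verify(u) }
  end

  # Entry point 2: triggered right after an upload event is processed.
  def on_upload_finished(upload)
    verify(upload)
  end
end
```

Either trigger can ship first; the other is then a small follow-up that reuses the same `verify` method.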
A: If we did the uploads part, and then added in the word "constantly" as a separate thing, it would be less. So if we did the work that verified uploads on the primary, so take the word "constantly" out, and then we verified uploads on the secondary, and then we verified LFS objects, you can then add in the automatic part as one issue later on, yeah.
A: But that makes me start to think that this needs to be part of this epic as well: if we are going to be verifying different types of data and displaying that information on the screen, we need to figure out exactly where we're going to show it, because adding more and more dropdowns isn't, I don't think, the right thing to do. No.
B: I think what is a follow-up for me here is that I can talk to Jackie and say: look, we're starting all of this work on verification, but currently how we surface this in the interface is not, you know, ideal. Can we discuss some UI and UX work here, you know, to actually improve this? But that can be, you know, the next step.
B: The UI, yeah. Right now, I mean, the UI to me looks like a copy-paste, you know, from nodes to uploads, or from projects to uploads, yeah, which is not pretty, you know. But you could argue that if that actually gets 90% of the job done with minimal effort, you know, maybe that's a good initial step. But I think there is something to be said for cleaning this up a little bit. Yeah.
C: We have the recheck and resync, and we have these batch operations here. So whenever you pick, for example, recheck, it will disappear, it will be scheduled, and you will see it in pending. The batch operations do the same thing, but for everything. And if we go to uploads there is one extra action, this "remove". If I understood this, it is something that existed on the primary but got deleted; for example, when you delete a project, all the attachments linked to that project are deleted from disk as well, but they still exist on the secondary.
A: Thinking about this from a holistic viewpoint is going to be much better, because, I mean, in my mind, changing the project screen to have all of the information about the contents of a project would be better, but I think that having UX help us figure that out would be good. So I think, as Fabian said, we can do the work to do this in the background and add the UI portion of it on later.
B: That makes more sense to me, especially because they are a little bit decoupled from each other, and we could, you know... I mean, ideally they land more or less at the same time, you know. But I think having a little bit of feedback from Jackie, and saying: look, this is the challenge we're facing, and this is what we're going to do, how should we represent it? I think that's, I guess, something that they're quite interested in contributing to as well, yeah.
B: I mean, this is fine, right? I anticipated that we won't be able to knock out everything here in 12.2, and it actually helps me communicate this a little bit better, because I can say: in 12.2 we are doing all of the backend work, you know, to actually make verification work really well, and then for my next kickoff call I can present it as a continuation of this, you know, for 12.3.
A: It sounds like the UI work and making anything automatic can be done in parallel, but actually verifying uploads has to be done before we can verify LFS objects, which has to be done before we can do the job artifacts. It sounds like that's all in a straight line. So the other set of things that looks like it could potentially be separate: there's one issue here, "The verification worker should check the health of the shard". Am I right in thinking that that is quite separate? Yeah.
A: It sounds like this is separate. It sounds like this is to do with how the workers are running, which I think is going to end up being different. I mean, I'm obviously not an expert on that side, it always feels pretty crazy to me, but it sounds like this would be done somewhat separately. Yeah.
C: So the Sidekiq jobs are defined in a way that forms a DSL. The way that you configure a worker reads like a configuration of how the job will be run, and all the configuration parameters can be tested, as something like "this job has retries disabled" or something like that.
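A minimal sketch of that idea, worker configuration as a declarative, testable DSL. This is modeled loosely on Sidekiq's `sidekiq_options`, but the module, class, and option names below are invented for illustration:

```ruby
# A class-level method records the options; tests can then assert on
# the declared configuration without ever enqueueing or running a job.
module WorkerOptions
  def worker_options(opts = nil)
    @worker_options = opts if opts
    @worker_options || {}
  end
end

class VerificationWorker
  extend WorkerOptions

  # Declarative configuration: no retries, dedicated queue.
  worker_options retry: false, queue: 'geo_verification'

  def perform(upload_id)
    # verification work would happen here
  end
end
```

A spec can then assert `VerificationWorker.worker_options[:retry] == false` directly, which is the kind of test the configuration-as-DSL shape makes cheap.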
A: The other one I think you were talking about was the "Check verification failures on the primary and secondary nodes". So at the moment we don't have a way to check the verification failures. This is about, you know, spending time thinking about how the information should be represented. So it sounds like...
A: So, in terms of talking about this epic, I think that was a pretty good first meeting about this particular epic. We've got some things that we can already schedule for work, we've got some actions to take away from here in terms of how to update these issues, and we can already start working on some of these things as soon as there's space. So is there anything else that we need to talk about today around this epic?
A: Another question, and this is for Gabriel: this "Constantly verify uploads on the primary"... everything else is about just verifying the objects, which I assume is on secondaries. So what makes the uploads special, that there are certain things that have to be done on the primary first?
A: These conversations are useful, because, you know, if we just started it and then suddenly found out how big it is, then we'd have made commitments and promises about what we can accomplish. And that's exactly what I mean: it's not what I was hoping to find in the conversation, but that's why I enjoy these conversations.
B: Good to catch them early. I think I have two observations, and this may be completely wrong. I've worked a lot in a field where you have to integrate lots of different, let's say, data types, right? So you have like 15 databases for protein structures, right, and we needed to combine all of them as soon as a new protein came in.
B: So my question is: is there also, from an engineering standpoint, an opportunity to say, this is sort of our framework? You know, like: if you add a data type and we replicate it, this is how you make sure the verification works, so that it's not always an effort of, like, five days for each thing. But I have no idea if that's feasible or if it's really so different.
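The "framework" idea, a shared verification loop where each new data type only supplies a couple of methods, could be sketched like this. The class names and interface are hypothetical, not GitLab's actual Geo framework:

```ruby
require 'digest'

# The framework owns the verification loop; a new data type only has
# to say how to enumerate its records and checksum one of them.
class Replicator
  def records
    raise NotImplementedError
  end

  def checksum(record)
    raise NotImplementedError
  end

  # Shared for every data type: return the records whose computed
  # checksum does not match the expected one.
  def verification_failures(expected)
    records.reject { |r| checksum(r) == expected[r] }
  end
end

# Adding a data type is then a small subclass, not a five-day effort.
class UploadReplicator < Replicator
  def initialize(files)
    @files = files
  end

  def records
    @files.keys
  end

  def checksum(record)
    Digest::SHA256.hexdigest(@files[record])
  end
end
```

The same two-method contract would also double as the checklist the group wants: for each data type, has it implemented enumeration and checksumming, and does the shared loop pass?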
A: We want to make it easy to add new data types, and we want it to be easy for other people to be able to add them for us. But it sounds like we're running into the same thing now: we've half-added a data type and we're now needing to do the rest of the work to finish it; like, we've replicated it, but we haven't finished the verification side of things.
A: It feels like this also needs to be... if we did this, at least we would also have a way of testing it when we were doing the work ourselves. Yes, because we would know these are all the steps that need to happen. So when we go back to talking about uploads: have we gone through all the steps? When we talk about LFS objects: have we gone through all the steps? Like reviewing all of the different types.
B: Yeah, no, I agree, and I think this is... so I am keen on doing it, because I think our customers need to trust us: that when we say we are replicating this, we know exactly what we do, and we can also clearly explain what the limitations are. You know, it's like: when we say we are replicating LFS objects...
B: This is exactly what we mean; but that also may mean that we're not doing some other things, right? And I think that should be very clearly stated, because otherwise people go: well, you know, but what about this? It's true here but not true there, and then it becomes really confusing. So I think it's a good opportunity for us to get this straight. Which is fun. Cool.