Description
This video is a discussion about the end goal of Iteration 2 (the demo) and how to get there.
A
Yeah, so I think yesterday's session, just walking through that epic, was good. Iteration 2 is simpler than I thought — it's basically that, which is good — but I think there are a lot of small details that we need to work through. I think I covered all the major issues; what I missed, which I didn't have time to look for, was any MRs that you might have been working on, Camille.
B
Yeah — to categorize, or to find all the cross-joins. I know we've got the cross-join analyzers, but how did you — how do you, in an MR, categorize, like, what's cell-local and what's cluster-wide?
A
So maybe let's step back again and, like, go through a similar process, so I can walk you through what I was going through yesterday.
C
Should we — should we take notes for this? I think... what about in the sync document?
A
This is, like, the end goal of Iteration 2: we want to record the demo, and the demo has, like, a very specific prescription.
A
The idea behind the demo is that user accounts are shared — basically across cells. So when you log in as a root user on the server on this host — which in this case can be, like, your GDK one — you can go and create a new top-level group on that cell, and all other existing functions continue working as desired. But you're also going to have another GDK configured on a different port, and because you share a hostname, the session cookie is going to be set across these two different cells.
A
So when you go to Cell 2, you should still be logged in as the same user — your user account is shared across all cells as part of the cluster-wide tables. So when you go to Cell 2, the idea is simple: we don't have to fix all of the views; we just have to be able to go to this page and create a new group — click the "Create group" button, type in a group name.
A
"New group" on Cell 2, the "create a group" button — and, after a few seconds, just be presented with this view, like a dashboard of the group. And the idea of that is: you can execute that on Cell 1, which is, like, all the existing function, and you can execute that on Cell 2, which is, like, something new — but me, as admin or root in the GDK, it's the same account.
A
As a result — if, for whatever reason, we also get the Explore function working — on Cell 1 we're only going to see the group created on that one, and on Cell 2 we're only going to see the group created on Cell 2. This is the demo; we don't need anything else — just the simplest, most basic and most common workflow that people even start with at GitLab.
C
You have done this before once, right? You did some, like, POC demo — it's similar to this one, right?
A
It's similar, but I had separate accounts there — it was not the same account. So basically I had the users table replicated on both databases, so all the foreign keys were kept intact and cross-joins were happening — but just by the coincidence of me storing the same encrypted password with the same hashing.
A
It just worked, with the same salts and things like that — it just worked, so that I could use a cookie across cells. But if I had, for whatever reason, changed the password on cell one in my POC, it would not work — I would not be logged in to Cell 2 then, because the encrypted password would be of a different form. So there was no data synchronization of the users. The primary purpose here is that users are shared, and you, as the user, can create a group on Cell 2 — which is, like, the most essential workflow.
A
Is it clear? Do you need some extra clarification on the demo — any questions, what would be needed there?
A
As part of this I recorded all the queries some time ago — I recorded all the SQL that was executed as part of this workflow when I create a new group, and I saw all of these tables being touched, by either going to a page and clicking the "create a group" button, or by a Sidekiq job being executed. These were all the tables that were touched by that workflow.
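A minimal sketch of that recording idea, assuming you have the raw SQL statements captured from log/development.log while clicking through the workflow; the regex-based extraction here is illustrative only, not the actual tooling used:

```ruby
# Illustrative only: pull the set of tables touched out of a batch of
# captured SQL statements. Only simple FROM/JOIN/INTO/UPDATE clauses
# are recognized; real queries can be more complex.
SQL_TABLE_RE = /\b(?:FROM|JOIN|INTO|UPDATE)\s+"?([a-z_]+)"?/i

def touched_tables(sql_statements)
  sql_statements.flat_map { |sql| sql.scan(SQL_TABLE_RE).flatten }.uniq.sort
end

statements = [
  'SELECT "users".* FROM "users" WHERE "users"."id" = 1',
  'INSERT INTO "namespaces" ("name", "path") VALUES ($1, $2)',
  'SELECT 1 FROM "routes" JOIN "namespaces" ON "routes"."namespace_id" = "namespaces"."id"'
]
puts touched_tables(statements).inspect
```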
A
So now, this is, I think, the most essential part: we not only have to make users part of the cluster-wide schema, but also some of these tables, because they, for example, store some data that the application may require, or they may have some foreign keys. So some of these tables need to be marked as gitlab_main cluster-wide, and then each of these models needs to be changed to use the main cluster-wide application record to access the cluster-wide database.
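For context: in the GitLab code base each table's schema classification lives in a `db/docs/*.yml` file, so the marking described here is roughly a one-line change per table. A sketch from memory — the exact keys and allowed values should be verified against `db/docs/`:

```yaml
# db/docs/user_preferences.yml — illustrative fragment, not the full file
table_name: user_preferences
gitlab_schema: gitlab_main_clusterwide  # changed from the cell-local schema
```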
A
So when you go to Cell 2, when you access, let's say, user details, it's going to go to the cluster-wide database to fetch this information, so this data is going to be shared across the cluster. The thing is, we don't need to fix all of them — we need to fix only some of them, I think, to make this, like, a happy path; because as part of the demo we focus on the happy path, we don't focus on the edge cases.
A
So as long as we can get this working with the minimal set of changes, that's our goal — we don't have to fix all of this. And in some cases it's going to be hard to break these cross-joins, because, for example, routes and redirect routes are joined extensively across the code base with namespaces — it's been altered very often — so in some cases it's going to be hard for us to make these cluster-wide changes. But, again, we don't need to fix all of them; we need to fix only some of them.
A
This is the "mark gitlab_main as cluster-wide" side of the story. There is another issue that talks about the same thing from the namespace perspective, because there is, like, a number of namespace tables. I was kind of looking at these — this was probably at least the list of the tables that should be marked as gitlab_main, the ones that were touched.
A
Namespace settings for sure, merge requests, issues, probably group features — we need to classify these; we need to set some of these labels to gitlab_main, but also the minimal amount possible to fulfill the goal, because they're going to create a lot of cross-join violations, and the idea is to focus only on the happy path.
A
So this is, like, a step before the demo — but if we go further, it's actually this issue that is probably the most important right now. I merged a change that puts users and namespaces into separate schemas, but it still allows cross-joins to gitlab_main_clusterwide.
A
We go with the cross-joins between users and namespaces: we identify these cross-joins — we maybe don't even fix them now, because we don't need to fix them now; we need to identify them, and we're going to figure out which ones to fix later. Then we go to basically some of these user tables and say: okay, from the logs it seems that user_statuses and user_preferences need to be cluster-wide; from that side of things it seems that probably projects needs to be cell-local. So you change the specification.
A
You get a new list of the cross-joins that are violating between this set of tables; you identify them and allow them, and it's, like, rinse and repeat. Basically, you're going to add one more table, it's going to create a new violation, so you need to fix them to get the pipeline green — until you, let's say, exhaust it; until you get to the point where, when you click the "create a group" button, you observe all of your violations being registered.
A
Basically, because then, out of that list of recorded queries that I saw — these were the cross-joins that I observed — and from that list of cross-joins that we have to solve, it seems that this is the only one we have to solve: the one cross-join that would block this workflow.
A
So the majority of the work is really about classifying a table — whether it should be cell-local or cluster-wide — and fixing the individual cross-joins that are crossing the boundary. And it appears that there is only, like, one, or maybe two or three, because I also changed some cross-joins to batch loading and an explicit select. But there is just a very small number of cross-joins, from the allowlist that we would create, that we actually have to solve for that workflow — and this is basically a rinse-and-repeat process.
A
You identify the cross-joins that exist for users and namespaces; as you add more tables, you classify more tables, you get more cross-join violations, you identify them — until, in your workflow, you get all of the cross-joins that you have to fix. Then you pick exactly those cross-joins to fix. You're not going to fix all of the cross-joins — it's not needed; we only need to fix the cross-joins that are being touched when you create a group.
D
I have a question — so, I think we have already answered it over time, just summarizing it. Identification is across the app, because the pipeline will fail — the whole test suite is going to fail, even in other areas, not just while creating a group — so identification is across the app, but fixing is only for that page where you create a group. Is that what you're trying to say?
A
Yes, yes, exactly. So, like, we're going to allow all of those — we know that they are failing — but we're only going to fix, like, single ones. Okay, so there are going to be, I think, up to maybe five cross-joins that we're going to fix as part of that effort. So, like, the bulk of the work is really identify and classify, yeah.
A
To add something like that: depending on the approach you take, you maybe add the allow-cross-joins-across-databases annotations with some referencing URL, to simply capture what we would pretty much put in the backlog — and out of this list of issues you would simply pick just the ones that are being shown in the log.
A
It's going to be visible in the log that this is a cross-join that was allowed, because there is, like, a specific comment — sorry, annotation — added to the SQL query when you call this method. So if you use this method and then go to the SQL log, like log/development.log, you're going to see all of the queries that were annotated, pretty much, with the cross-join URL. So if you find this kind of annotated query, this is pretty much part of the to-fix list.
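A sketch of mining the log for those annotations; the comment format is an assumption here — the real shape of what the allow-cross-joins helper appends should be checked against actual log output:

```ruby
# Illustrative only: collect the issue URLs of allowed cross-joins from
# development log lines, assuming the helper appends a SQL comment like
#   /* allowed cross-join: <url> */
ANNOTATION_RE = %r{/\*.*?cross-join.*?(https://gitlab\.com/\S+?)\s*\*/}

def allowed_cross_join_urls(log_lines)
  log_lines.filter_map { |line| line[ANNOTATION_RE, 1] }.uniq
end

lines = [
  'SELECT "users".* FROM "users" /* allowed cross-join: https://gitlab.com/gitlab-org/gitlab/-/issues/123 */',
  'SELECT "projects".* FROM "projects"'
]
puts allowed_cross_join_urls(lines).inspect
```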
A
This is really, like, your target: fixing only out of the list of issues that were identified. And it's really a rinse-and-repeat process — it's slightly boring, to be honest — but we have to go through this initial classification, deciding between the cluster-wide and the cell database, because then it's going to be way easier. The more tables you add, the amount of violations is basically going to go down step by step, because most of these things are going to be identified already.
A
So that's — that's the idea of how I was kind of envisioning how we would do the demo, but it's pretty much just one specific, opinionated workflow. No other page has to work — only that page and that workflow. Rinse and repeat on tables — pretty much only on the tables that you observe as part of this workflow; classify only the ones that you need — you probably don't need to classify all of them. So I guess it's more about figuring out:
A
do I really need to classify this table? If the answer is yes — in some cases it's a little exploration whether it's needed or not — then I'm just going to classify it and get the pipeline green by adding these allowed cross-joins, or by adding them to the allowlists.
A
Okay — and there are also going to be foreign keys to be fixed, because the tables you touch are going to use foreign keys, and, for example, a given foreign key is not going to work. As part of the failures it's still allowed by this query, but if you remove that, there are going to be foreign key violations — so some of the existing foreign keys crossing the boundary will have to be migrated into loose foreign keys. But it's a pretty mechanical step, with our existing script that you can use for that purpose.
A
Cross-transactions are significantly harder to solve, but we don't need to solve cross-transactions for the demo. Cross-transactions being identified is still, like, a requirement for deploying into production, but we really don't need to do it now — we only need to focus on the cross-joins; we can focus on cross-transactions later. The problem with the lack of the cross-transactions being solved or identified now — which is really not that much of a problem for a demo — is that if you do a cross-transaction, you're going to open a database transaction across two different connections.
A
Second downside: cross-transactions are not rolled back properly, because you cannot easily generate an event and get that kind of rollback done. So in our production code base we forbid nesting — having, like, an outer and an inner transaction that target different databases — for performance reasons and rollback reasons. But for the purposes of the demo we don't really care about them, because we only cover the happy path. We're going to have to fix them later, but it's more important to fix cross-joins now than to fix cross-transactions, because we can work even if there are cross-transactions in development.
A
But this will not work if there are cross-joins, because the cross-join is going to be invalid. So cross-transactions are something to keep in mind — it is a problem — but we don't specifically have to solve it now. We can maybe work on identifying them concurrently, if we have actual capacity to do it, but it's not really a requirement for doing the demo, because the demo doesn't depend on the need to solve cross-transactions.
A
This is one change to the prior plan, because previously it was: identify and fix all of the cross-joins, cross-transactions and cross-database foreign keys. Now it's basically: identify and fix only the specific cross-joins, and fix the cross-database foreign keys, as the requirements; cross-transactions I removed from that list as not required. And I think I posted everything that is in the epic as the issues — other issues are going to be basically created while you identify cross-joins, but you're only going to pick some of those.
B
The — the cross-joins, I guess we have to be careful not to fix too much, I guess, because — especially if we are not sure whether something's cluster-wide or not.
A
So today it's just this allow-cross-joins key, but you could also extend that to include a URL if you want — that's also going to work. So then you're going to have, in a comment — so in the log, when you execute the specific workflow — a list of the SQL queries executed, with the comment with a URL of the cross-join, and basically this is the only target that we have: we have to fix the ones that are annotated there.
D
If there is a way to annotate all such cross-DB joins automatically, then if you run the workflow of creating a group, it will highlight the things that we actually have to fix, and then we can just fix those, right? Have we thought of that approach? But then it will not fail the pipeline or give us the confidence that it is covering the whole thing — but then, still, for the happy path it might be a viable approach.
D
So the annotation comes in only when you use this method — but then, is there a way we can annotate cross-DB joins automatically, so that the logs would show "hey, this is a cross-join" when we run the workflow of creating a group, and then we can just fix that? I don't know if the question is clear, but—
A
No, there is no query analyzer — but, like, you could probably have a query analyzer here, like this: one that logs SQL where the query is touching the GitLab schemas — in particular having gitlab_main_clusterwide and gitlab_main_cell as part of that specification — and basically logs that very visibly, where it's happening, and from that you basically try to find those and fix those. So I kind of understand that this is what you're proposing: that there is another way to identify them that is not using this method.
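A toy sketch of that analyzer idea — given a table-to-schema classification, flag any query whose tables span both schemas. The classification map and the table extraction are invented for illustration; only the two schema names come from the discussion:

```ruby
# Illustrative only: a hypothetical classification of a few tables.
SCHEMAS = {
  'users'      => :gitlab_main_clusterwide,
  'namespaces' => :gitlab_main_cell,
  'projects'   => :gitlab_main_cell
}.freeze

# A query cross-joins when the tables it touches map to more than one schema.
def cross_schema_join?(sql)
  tables = sql.scan(/\b(?:FROM|JOIN)\s+"?([a-z_]+)"?/i).flatten
  tables.filter_map { |t| SCHEMAS[t] }.uniq.size > 1
end

puts cross_schema_join?('SELECT 1 FROM "users" JOIN "namespaces" ON 1 = 1')    # spans both schemas
puts cross_schema_join?('SELECT 1 FROM "projects" JOIN "namespaces" ON 1 = 1') # cell-local only
```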
A
Yes, it's, like, a possible way to do it, but I wanted one thing as well: I also wanted to get rid of that line, the schemas specification.
A
My thinking was: I wanted to get rid of that specification because — like, you can do it that way, yeah, and everything is going to be green and you're going to fix a very specific one, so we're going to get a demo — we're probably going to get the demo maybe faster, because you're not going to mark everything that is violating — but it's not going to, let's say, mark all the underlying queries that are cross-joining users and namespaces.
A
Yes — and I was thinking that, regardless of how many failures you observe, you may kind of put it back and forth to ease the work, or — if there are not a lot of failures — you just try not to put it back. But one thing that I felt was overwhelming is when you put half of the tables in the — sorry — like, if you put 50 tables in cluster-wide and the rest in the cell, it's basically going to fail the whole test suite.
A
So I was kind of thinking of doing that iteratively, starting with, let's say, just two tables. You're still going to have, like, a bunch of specs failing, but from my POC it turned out there are maybe 40 cross-joins that need to be allowed through that query, basically. So it's still not a big number to iteratively go through the test suite to remove them — but it was generating, like, a dozen thousand failures, because this cross-join is between users and namespaces.
A
They are at the root of the project; they are used everywhere — simply everywhere. So this would be my approach — but you're going to figure out your approach; this is my recommendation, but ultimately you're going to figure out your best way to do it. So I think it's, like, a valid solution also.
D
Yeah, yeah — I wanted to close the loop around my thoughts, so I understand. So what I propose can implicitly solve the problem for the happy path, but it will not push us in the direction we want to go — only your solution will push us in that direction. Because this is exactly what Dylan asked us, or asked you: we are not moving in the direction we want to be in when we introduce this yml and allowed cross-joins, so at some point we have to remove it.
A
So it's — it's not a wrong idea, like, yeah. My idea is also not — each of these, let's say, puts the accent on different things. So I think in the initial phase it's going to be tricky to figure out which one of those is better, so just do whatever makes you, like, the most efficient. I personally played with different ideas to find, like, my way of working on that problem, because it's, like, pretty daunting and pretty repetitive, but—
A
I have, like, MRs where I don't even look at GitLab to see all the failing things. I just run a script, and it's going to pull the merge request ID, find the pipeline ID, get, like, all of my jobs and give me a list of the specs failing — and this is what I was using in my iteration. I didn't even use the GitLab UI at all. But if I had to use the GitLab UI today, to go through that method of just removing that and finding all of the failures—
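The core of such a script could look like this — a sketch assuming the documented GitLab REST shape of `GET /projects/:id/pipelines/:pipeline_id/jobs`; the HTTP fetch itself (with a `PRIVATE-TOKEN` header) is elided, and `jobs_json` stands in for the response body:

```ruby
require 'json'

# Illustrative only: list the names of failed jobs from a pipeline's
# jobs payload, so the failing specs can be re-run locally.
def failing_jobs(jobs_json)
  JSON.parse(jobs_json)
      .select { |job| job['status'] == 'failed' }
      .map { |job| job['name'] }
end

jobs_json = '[{"name":"rspec unit pg14 3/32","status":"failed"},' \
            '{"name":"rubocop","status":"success"}]'
puts failing_jobs(jobs_json).inspect
```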
A
—then in two hours' time I'm going to get back, run my script and see what is failing; then I'm going to open a terminal, take three or four different specs, run them in the terminal, see what is failing, mark the things that are failing as allowed cross-joins, commit, push back to GitLab, and, I guess, be back in two hours to get those test results. And I was doing this iteratively, iteration over iteration, by kind of following this steady process.
A
But I was only able to do it because I had the tooling for it — I had a very easy way to aggregate all the failing specs, to get them in, basically, I don't know, 15 seconds, instead of going to the UI, taking the pipeline link, clicking through and then scrolling down to see what is failing. So I reworked my process, basically, to make it efficient to work that way.
A
If you are curious — I pushed the script; I just started using scripts extensively. I just pushed this script into my bin folder, and on every MR that I have failing I simply run it, and it's going to pull me the status of the pipeline and of the failures. So I don't even use the GitLab UI anymore for that stuff.
C
No, but it's — it's particularly—
A
"We are not solving those, we are identifying them?" Yes — and also, we're going to solve only what we need, and for the things we don't need, we're going to ask others to fix them. Okay, so we're going to go through the plan of: this is the workflow we are focusing on, and we are fixing only what is essential for that.
A
Honestly, only what is strictly required by our changes. There's a chance that, if we have the capacity, we're going to solve some of those things, but for the other stuff — similar to the decomposition — it's going to be others. You're just going to describe what the problem is and give them the tools, but they're going to fix them. We're not going to be able to solve some of those ourselves, because they are quite hard cross-joins — they have some very high cardinality — so some of this processing is going to have to be reworked.
A
Some of it is going to have to be precomputed, so we're not going to be able to efficiently solve all of them. And it's also about dividing up the task, because I think what's essential for us — and this is also the question, I guess, that you should think about initially — is in what cases you make the decision to make something cluster-wide: when you classify these tables, why do you do it, and for what reason?
A
And what is the data access pattern that forces that? Because, ideally, the amount of cluster-wide stuff should be just a handful — there must be a real need for marking it — because the cost of making things cluster-wide is, like, an amplification across the whole system. So our default decision is: everything is cell-local unless we have a very strong justification for it being cluster-wide — but we still need to figure out what that justification is.
A
So I think our role here is, like, the seed classification — which is basically users and namespaces, which are, like, the roots of the hierarchy — but really making other teams, let's say, do this fine-grained classification of their own tables below. But I think our purpose is to get a sense of the workflow, get a sense of the classification, build tooling — be it, like, procedures — figure out how we're going to implement that, figure out what is essential in our plan, so we could actually get others to work with us.
D
And when we create an MR for this, is it okay if we take a fixed approach? Like: I remove cluster-wide from the yml, then I identify, say, 10 cross-joins and allow them, and then I add it back, and then the pipeline will be green and we can push it, right? So we can do multiple MRs at the same time, right?
D
So you add it back in the yml, so that — so: I identify 10 failing things, I allow them, and then, when you add it back, the pipeline will be green again, but you have identified 10 of them, so that can be merged as one MR, right? So you can take this repeat approach.
A
Whatever works for you — like, it's pretty, let's say, hard; the whole process is very repetitive, and our pipelines are flaky. So whatever makes you the most efficient to get these things through. It could be done differently: in other cases I did, like, constant rebasing — I had, like, a big MR open that was collecting all of these things, but I was chipping away from that MR smaller things that I knew were actually a problem, merging these smaller things separately and rebasing the bigger MR. I also have a script for basically maintaining a merge train of five, or whatever number of MRs I have in the chain.
A
Okay — I'm quite used to doing that, because I'm kind of used to a workflow where I have four MRs in a chain, pretty much, and I have a big one at the end, and I slowly chip away from that big one to put pieces at the front of the queue of things to be merged, and then I constantly rebase.
D
I am thinking, if we want to divide this work between engineers, I think taking a per-table approach would be nice: add a table to gitlab_main or cluster-wide, remove it from the yml, see what's failing, fix it, add it back to the yml, and then merge the MR. So you can do it on a per-table basis also.
A
Whatever works, really — you're going to figure out your best way of working once you start trying it. I also think the tricky part is how to do it in parallel, but I was kind of thinking that maybe there's some sequencing needed only on identifying the cross-joins between users and namespaces initially — but maybe not.
A
That could be, like, a cross-join that could actually be worked on by someone else already. So you have one person identifying cross-joins between users and namespaces, but you also have to figure out the classification of the tables — you could maybe start going one by one on them and trying to understand, like, the data relation between those tables, where they should live. And maybe in some cases it's even easier — like maybe licenses, or plans: it's such an easy win to even put them in cluster-wide already, because why not, if we know that we're going to have to fix them later?
A
But this is up to you, because, like, I think there are a few stages in this process, and it helps to start with something smaller to get, like, a good sense of the practice — so doing this easy win.
A
If you didn't work on that before: like, just — okay, I'm just going to change the classification of plans, and I'm going to change the base model and get this MR green, to see exactly what is needed to get it green — basically just getting a general sense of how to approach it, before you get more efficient over time on something bigger. So I — so I think it's, like, a balance between getting small things done, maybe, and ramping up to this much harder fix.