From YouTube: Import/export feature - think Big
Description
Discussion about the import/export feature and what we can do to improve resiliency and security and reduce the complexity of the process.
A: Welcome to the meeting about the import/export feature of GitLab — think big. The idea of this meeting is to try to articulate the current challenges that we have with this feature and to settle on one particular problem to solve; and this problem, as stated in our talk, is trying to make this process restartable. The idea behind thinking big is that we try to gather the knowledge about this feature that is spread across the different folks working on this particular code base, to better understand its current state and how different people perceive it: how easy this feature is to maintain, how resilient it is and how performant it is.

A: Before we get to the problem: this meeting is split into three sections. In the next part I would like us to go outside of the box and think: if you had to start working on this feature from scratch today, how, in an ideal world, would you like it to work? Completely disconnect from how it works today — no boundaries. Pick anything, even something that from your perspective may seem completely dumb but is actually maybe quite a clever idea, and think about a completely different concept for how this problem could be solved. We know that import and export is a feature that has to process a ton of data and runs for hours, so maybe that means it has to be super optimized, very tuned to this particular case — but then how do we maintain the complexity of that? And the last part of this meeting, the third part, where I hope we spend most of the time, is where I want us to work on this particular problem.

A: So let's start with this quick introduction, and let's start maybe with the current deficiencies. There is a lot of overlap between them, but they vary, so Alex, could you start with yours? And could we time-box each of these deficiencies to, I don't know, 30 seconds or something like that, to keep us on schedule? Yes.
B: Sure, sure. For my main deficiencies I tried to highlight where we are missing solid testing: we don't have black-box tests that are functional in nature, and we're missing the vision from the product side that import exists — when you are creating a new relation, you need to take into account that it should also be tested from the import perspective. I also wanted to highlight that the code, from my standpoint, is very complex, and we're missing some granularity and a proper flow. So that's it for 30 seconds.

B: I think that somehow resonates with the idea that we wanted to introduce this importable concern into models. This would raise awareness that these models will be imported, we could track these kinds of models easily and remind developers that they need to add this concern, and tie the relevant specs and support to it. Also, I think we need one black-box test that does a completely blind import/export, as Camille proposed some time ago.
C: I actually didn't check on that story today, but yeah, it's on our list — we have a story for this already — so I think that's something we're already looking into. I have a couple of other things I wanted to add, though; two things around testing. Specifically, one thing that I find difficult is to have a representative set of projects of given sizes, and maybe also with different distributions of the data within those projects, that represent to a large degree what our customers actually import out in the wild, and that we can then use for testing — the idea also being that our team uses the same projects and we don't have to create them over and over. I remember, during the customer incident — the reproduction of the import issue that we had — we shared a couple of scripts with each other, but every team member ended up controlling their own projects for performance testing, and it was really hard to compare results because everyone used their own. Yes, so that was one point. Another point: it's really hard to test these things against a production-like environment before we actually merge a change. That's something I'm stuck with currently: I pushed an MR and it's being deployed to a review app, but so far I've failed to create an import with the review app, whether via the API that we have or via the web UI — none of these things were working, so that makes testing really hard. Some of the other things Alex already mentioned.
B: I would also like to add that the export fixture we test against, from what I've seen, is not being updated. On the one side that is good, because we will be testing against a consistent dataset and we can compare results; on the other side it may miss some new additions to the project structure, so we need to adjust the process of updating that fixture, or whatever is being tested against.
A: So there is one side of this: if you import an external repository, like the Linux kernel, you basically import a repository, and that data set puts pressure on the Git storage side. But GitLab itself has a lot of data related to issues and merge requests, and this is actually the part that I find the hardest to model. The process of importing the repository and the wiki refs is kind of predictable — you know what to expect, and it's quite easy to model: if you import Torvalds' repository, you know roughly what to expect. But the tricky part is how we model every possible combination of relations — let's say a pipeline depends on a merge request, a merge request depends on a pipeline, a merge request has thousands of notes, and those merge requests reference issues.

A: It's user-provided input — the user can put anything in this JSON — and I believe this is probably the biggest challenge for me: that we are not strongly validating the JSON that is coming in. It seems, from my perspective, that a lot of security vulnerabilities came from the fact that we are side-stepping every other API that you would otherwise be using.
D: There are some areas that we can start working on immediately, like doing the allowlist (whitelist) of attributes and validating them, as you mentioned, instead of blacklisting, because that's the current approach we are taking. With blacklisting, the fact that the code base is open source doesn't help either: people can investigate exactly what is happening when you upload the file, and that definitely doesn't help. But one of the approaches that we are going to investigate is how to be more strict — to introduce a whitelist of allowed attributes per model instead of the blacklist, as well as maybe in-depth validation of each relation, because right now all we do is clean the top-level attribute keys.
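A minimal sketch of what such per-model allowlisting with recursive relation validation could look like — the relation names, attribute sets and function names here are purely illustrative assumptions, not GitLab's actual import code:

```python
# Hypothetical per-model attribute allowlist; unknown keys are rejected instead
# of relying on a blacklist of known-bad ones.
ALLOWED_ATTRIBUTES = {
    "issue": {"title", "description", "state", "created_at", "notes"},
    "note": {"note", "author_id", "created_at"},
}

# Which attributes are themselves nested relations, and of what type.
NESTED_RELATIONS = {
    "issue": {"notes": "note"},
}

def validate_relation(relation_type, attributes):
    """Reject unknown keys and recurse into nested relations, not just the top level."""
    allowed = ALLOWED_ATTRIBUTES.get(relation_type, set())
    unknown = set(attributes) - allowed
    if unknown:
        raise ValueError(f"{relation_type}: unexpected attributes {sorted(unknown)}")

    for attr, child_type in NESTED_RELATIONS.get(relation_type, {}).items():
        for child in attributes.get(attr, []):
            validate_relation(child_type, child)
    return attributes

# Example:
# validate_relation("issue", {"title": "Bug", "state": "opened",
#                             "notes": [{"note": "hi", "author_id": 1}]})
```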
C: I want to add one thing to this — feel free to challenge me on it, but having worked with this for only a few weeks now, I wonder how useful it really is that the project description is in JSON. JSON is human readable, but for most of the projects I've worked with, the JSON file is so large that it's essentially not usable: if you open it in a text editor it will just blow the memory limit, and even jq is choking on some of these things. For the customer import we had, it was a two-and-a-half-gigabyte JSON file; it's just not something you can easily work with. So I'm wondering if this is a lost opportunity for performance optimizations, because there are way more efficient file formats than JSON, like protocol buffers — they do things like deduplication as part of the protocol. So maybe we're losing efficiency for something that isn't that useful after all, except for the smallest projects, I think.
E: I was going to say: I think regardless of the format we would have a problem, because there's a lot of data there, so I think it would be great to split it up. If you split it logically, then it will also be easier to edit it, for a merge request or something. I don't think using a different format... I mean, it may help, especially performance-wise perhaps. I think Rails wants to do something like its own serialization because of the way this is stored; otherwise we would have to write different serializers and so on. Again, my thinking is that we could do that, and it would definitely improve performance, but it would mean some work, and maybe there's an easier path there if you just split it, basically.
C: To quickly follow up on that side: the idea behind protobuf is that you have an IDL which is universal, and you generate clients from that IDL — that's how most of these protocols work; Thrift is the same thing. So you don't have to write your own parser, it's generated for you, and I'm pretty sure there are bindings for it, so I don't think that should be a problem. There's no schema in the JSON — or you could argue it's schemaless — but yeah, the idea is that you have interface and message definitions in a protobuf-specific IDL. That is something you would have to maintain, but it can be an advantage as well, because currently we have JSON documents and we don't even know what they look like, right? That's why we have a bunch of workarounds in the import itself, where we check things.
A: So with the proto you have a good representation, but you have to have an additional schema on the side, pretty much for each of these objects, where you explicitly enumerate what you accept and in what format. That is a kind of very strong validation, which could serve as a middle layer: everything has to go through this middle layer that performs strong validation of what is allowed, in what version, what is what and how it is constructed. It just puts more work on us, because you have to effectively maintain an additional representation of a model that is disconnected from the database schema — which, technically, is what our import_export.yml is today; it's just not super structured. Technically this is exactly what exists today: we just have this representation in a YAML, and everything is exported unless you explicitly say that you don't want it to be exported, or you say that only a specific list of attributes should be exported.

A: And I think, from what I saw so far, a lot of people who touched the import/export spec asked: why did an attribute I added get automatically exported? This has already resulted in a few issues where people added a column.
E: We do have a few checks: we make sure that the model contains the attributes, so you can't just put any crap there. When you import anything, it will go and say, well, get me the list of the model's attributes, see if this is one of them, and if it is, then you might be able to import it. And then we do have the manual action there, and the spec does mention it: if you really think that this needs to be imported, then add it to the list of things — there is sort of a whitelist there — and then it can be imported and supported; otherwise add it to the blacklist. So there is a manual action there. And because we can detect when something gets added — a new model or a new column — we definitely detect it and the spec fails all the time, since this is a manual action for the developer to do something. This is related to what we talked about with the spec before: I wonder whether it makes sense, as part of that, since we know when someone adds something to the import/export, to force adding a spec as well, because we don't do that at the moment; we just say hey.
E: Yeah, yeah, I think that would be good. The other thing that is relevant here is that we do have some sort of integration spec. You know, we have a project set up with issues, a few merge requests, a few notes, then we export that, we get the file and we check the file; and then another one, very similar, for the import, where we actually have a file — with this integration spec we import the file and then check that all the relevant relations are created and check a couple of the columns. I think I talked about this with George: we don't force any changes in those files. So at the moment those integration specs are really basic — maybe they just have a few issues and a few merge requests — and we never really update them, even though I think they're quite good ones, because they actually export a whole project with a bunch of things, and they do the same with import: they grab an actual import archive and import a bunch of things from it. But we never force updating that, and I think one of the reasons is that it's a bit difficult, because then you have to import the tarball, add the things that you added, and then export it again. I wonder if it's possible to make that process a bit easier; then we might be able to force that, or even make it automatic somehow. For example, we have a rake task that updates the version — I'm simplifying — it does its thing and zips it again, and then it works. So I mean something like that. I think those are probably some of the best specs that we have, the integration ones, and enforcing... ah, I have to run, I have another meeting.
A: Thank you, James. OK, so I would like us to move to the next section, because we mostly acknowledged as part of the deficiencies that probably the biggest one is the complexity of the process and the time it needs to take. Let's think about the ideal-world ideas. Could you start with yours?
C: ...I had to dig down the entire stack of the importer to really understand what's going on, and I think there are two main reasons for this. One is that there are a lot of improvements, optimizations and edge cases sprinkled all over the main classes — which I think are the project tree restorer, the relation restorer and the relation factory — so you kind of have to go through every single one.
A: I wonder — you mentioned the sequential steps, and it's a really nice point — because the way I think about the import/export is that you have a set of steps that you have to execute, but we only really have the first step that has to be executed: creating or updating the project. If you think about all the others, there is probably no well-defined order in which they have to be executed. For example, think about importing the wiki: it's just another step of the process. Currently it's modeled as a completely sequential system, and it happens before everything else. But if you look at it, there is a very, very interesting question here: is the ordering of the import execution defined today? What happens if, in the JSON, we move labels after merge requests, or we move merge requests before or after issues — will the project import fail or not? Do we somehow resolve the topological ordering of the import — which objects need to be imported first, before others? Is that somehow covered, or is it a problem, or does it just not happen because we somehow hard-coded the correct ordering in which the objects need to be imported? And the next question is: what happens if you have a kind of circular dependency between objects — how is it actually resolved? Is it resolved at all?
C: I don't know what would happen, but one thing I did notice — and why I said there is some sort of dependency already right now — is that we store every single relation one by one, and this means that, from an ActiveRecord validation perspective and a database integrity perspective, there are certain things we need to do first, kind of inline, right? That's already the case: the issue needs to be there in the database before we're able to insert a merge request or whatever else depends on that issue. But what I'm saying is that it's all done per item: we do all of these things for one item, then we move on to the next item and do all these things again. I think more about slicing it differently, where maybe we can find a way to build a representation up front. Right now we do another database round-trip to find that item again, which I think is totally unnecessary. If we flattened out this representation up front, and we knew that everything that comes before the current item has already been processed, then we could make more assumptions, I guess.
A: Maybe circular dependencies can be reduced to simple directional dependencies only, but we don't really have any way to check that today, because even though we have full knowledge about the tree, we are not really using that knowledge to detect these dependencies. So every new person who adds maybe this kind of circular dependency is not aware of it and might unintentionally break the process.

A: But if you look at the steps, for example, you could say that the step of importing the wiki requires you to have the project — that is its requirement — while the step for importing merge requests means issues come before it, and issues and merge requests require the labels relation to be imported before them. You kind of create these steps automatically based on the tree, then you try to order them in the correct sequence of execution, and then it is up to the scheduler to define whether it executes them sequentially or in parallel — whether parallelism is achievable there. It can mean that within a step you execute the import of multiple items in parallel, because the labels are created and the members are assigned, so it's just fine — you fire off N concurrent jobs that are churning through the data.

A: You can access the data — your representation of what is to be imported — and have a better representation of the process as a whole, because the process is modeled as a graph of operations: you know where it begins, you know where it ends, and each step depends on prior steps to execute. Right now this ordering, I think, is not defined explicitly; it's rather defined by coincidence, technically.
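A minimal sketch of that explicit step graph, assuming hypothetical relation names rather than the real import_export.yml contents; with declared dependencies, the execution order comes from a topological sort instead of coincidence, and anything returned together can be scheduled in parallel:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical step graph: each import step declares what must exist before it runs.
STEP_DEPENDENCIES = {
    "project": set(),
    "labels": {"project"},
    "milestones": {"project"},
    "wiki": {"project"},
    "issues": {"labels", "milestones"},
    "merge_requests": {"labels", "milestones", "issues"},
    "pipelines": {"merge_requests"},
}

sorter = TopologicalSorter(STEP_DEPENDENCIES)
sorter.prepare()  # raises graphlib.CycleError if someone introduces a circular dependency

while sorter.is_active():
    ready = sorter.get_ready()          # steps whose dependencies are all satisfied
    print("can run concurrently:", ready)
    for step in ready:                  # a real scheduler could fan these out to workers
        sorter.done(step)
```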
A: Maybe — I don't know the answer to that — but maybe we simply disallow this kind of conflicting representation today. As a first iteration we only allow a simple representation, and if you have a dependency that we cannot express — if someone adds a relation that makes the graph circular — then we just cannot support that today: you simply do not import it, unless we define a second step. Because now it gets interesting: if we define a step as the smallest primitive, maybe you have multiple steps per relation that define different operations to execute, and this is, for example, how you resolve a circular dependency. You may have a merge request that depends on the pipeline while the pipeline depends on the merge request, and it can still be defined: OK, first we import merge requests, then we import pipelines, and then we have an additional, automatically generated step that, based on the JSON data, fixes up the state of the merge requests to link them to the pipelines. It's kind of freeform: you define the steps, and the execution order is resolved automatically.
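A toy illustration of that fix-up idea, using plain dicts and hypothetical keys instead of real models; the cycle between merge requests and pipelines is broken by importing both without the link and letting a third, generated step wire them together afterwards:

```python
# Toy data standing in for the parsed export; the keys are hypothetical.
raw_merge_requests = [{"iid": 1, "title": "MR 1", "head_pipeline_id": 10}]
raw_pipelines = [{"id": 10, "status": "success", "merge_request_iid": 1}]

# Step 1: import merge requests without the pipeline link (this breaks the cycle).
merge_requests = {mr["iid"]: {**mr, "head_pipeline": None} for mr in raw_merge_requests}

# Step 2: import pipelines; they can already reference the created merge requests.
pipelines = {p["id"]: {**p, "merge_request": merge_requests.get(p["merge_request_iid"])}
             for p in raw_pipelines}

# Step 3 (generated fix-up): restore the back-reference recorded in the export data.
for raw in raw_merge_requests:
    pipeline_id = raw.get("head_pipeline_id")
    if pipeline_id is not None:
        merge_requests[raw["iid"]]["head_pipeline"] = pipelines.get(pipeline_id)
```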
A: So that was the idea about the API. What do you think about the API? I think the concept is very similar to Mattias' idea of the protobuf — it doesn't really differ that much. It's more about having a middle layer to say: this is the valid format, this is the data that we expect, and if the data doesn't match, it cannot be accepted. My idea about the API was to have some additional layer of validation of the data. Would it have to be a separate service, and would it be hard to make it performant?

A: I'm thinking about two different ways of doing that, because right now everything is very deep in the application: it's quite easy to expose new data, but it's quite hard to maintain the consistency of the data. What if we said that everything that is added to the codebase has a corresponding API right away?
A: Maybe during the import process this API is more permissive, so it allows you to accept significantly more, but it's the responsibility of every team to provide an API, the responsibility of every team to add their entities to import/export, and their responsibility to validate that these things are properly imported — while the import team's responsibility is to provide everything around that, which is the service that calls these APIs.

A: The exported project is accepted by pretty much every version, so it's almost impossible for us to start removing things from the export, because people may still try to import from the newest version into a past version — and we are not effectively testing that at all. So someone exports a project from the latest version and imports it into 11.11, for example, and it's very likely that it's going to break, and then we have a customer coming to us saying: hey, my import is not working.
A: It worked in 11.11, and we don't really have a solution to that. As we all know, the versioning schema is not defined, and we don't have any logic in the application to be backward compatible. We are just additively adding new data, and we very rarely remove data, because we don't have mechanisms that would allow us to safely say: these are the versions that we support. Currently the documentation says that we support imports back to, I guess, 11.3.

A: So there are multiple sides to that API. Having it as a microservice would actually allow us to have stronger consistency of the data being accepted, because the API by design could be versioned, or could at least provide validation, but it is also harder to maintain. That's the idea with the protobuf protocol, for example: you can define different versions — with protobuf you define fields with field numbers, and protobuf is effectively versioned. You can have multiple versions and it doesn't matter much.
C: It's so compact as well — there are no field names on the wire. It's quite a simple approach, because they basically say: you never reuse a field number you've already used in the past. So there's some care to be taken on the development side as well — it's not automatic — but it allows you to do that. Yeah.
C: To add something else: I'm totally OK with the idea of putting this behind an API, or several. What I would love to see — and I would love to get your thoughts on this — is... I think there's too much going on. There's this one importer module that we have. Actually, I haven't looked at the export part that much, but just talking about the import, there is already so much going on. I think I have personally been successful at making things simpler in the past by thinking of these things as: given some input, what is the next logical output that I can produce, which I can deliver as a separate artifact? One example I have for this: we just worked on this optimization where we shrink the JSON that we feed into the importer.

C: Right now it's really hard to know what's going on at any point in time, and you cannot verify the intermediate artifacts that you produce, because they are implicit — they are just generated as the whole import runs, and it's a black box. And then there are all kinds of different approaches: it might be multiple services talking to each other, it might be just a job pipeline where you have an optimization job that runs before you actually import. So: think about what our inputs and outputs are, what our logical steps are, and break this problem down so that each part gets simpler as well. I don't know exactly what all these steps are, or what belongs in each particular step necessarily, but I think that is something I would love to see: let's break it down.
A: So let's go to the problem, because we are slowly running out of time. If we had to do it today — let's focus on the import and make it restartable, so the import can resume from an operation — let's think about the steps that would make that somehow possible, and the steps that would make it performant. Because right off the bat I believe that we can do something that makes this process restartable, but it's not going to be super performant. So we need to think about it: if we wanted to make this process performant while achieving the goal, what would that end goal look like? We can then pick an iterative way to achieve that end goal. So let's think about how we could achieve that goal in a way that is performant.
C: ...like checkpoints that we can save, and then say: OK, in this list of things that we need to process, we've gotten this far. Then progress is much easier to reason about, because if I have a list of a million things and I'm so many items into that list, it's super easy to calculate progress; it's much harder to do in a tree.
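A minimal sketch of that checkpoint idea, assuming a hypothetical JSON file as the cursor store and a placeholder process() for the per-item work; both resuming and progress reporting reduce to a single offset into a flat list:

```python
import json

def process(item):
    pass  # placeholder for the real per-relation import work

def import_items(items, checkpoint_path="import_checkpoint.json"):
    """Process a flat list of items, resuming after the last saved offset."""
    try:
        with open(checkpoint_path) as f:
            start = json.load(f)["offset"]
    except FileNotFoundError:
        start = 0

    total = len(items)
    for index in range(start, total):
        process(items[index])
        with open(checkpoint_path, "w") as f:   # persist the checkpoint after each item
            json.dump({"offset": index + 1}, f)
        print(f"progress: {index + 1}/{total}")
```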
B: Yeah, I agree. It follows the same idea Camille mentioned: that we somehow make it possible to split this into separate tasks and schedule them, and since we would have a natural ordering — based on a topological sort or something else on top of that — we could just remember our position in the import and restart from that position, I would think.
A: How about this: maybe one of the ways to improve performance is that we know that, in the end, we want to store each relation separately. So maybe what we should do today is: we get this big JSON as we receive it, and we split it into separate files in the most efficient way. We transform that big tree JSON into multiple NDJSON files — something we may be doing in the future anyway, when we solve the versioning problem — but we may provide this translation logic today as part of the import process. Maybe it takes a few seconds, but we rewrite that big JSON into multiple small files, and the importer then does exactly what it would do with NDJSON.
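A rough sketch of that translation step using only the standard library; the file layout is a hypothetical assumption, and a streaming parser (ijson or similar) would avoid loading the whole document, but the output shape is the point — one NDJSON file per top-level relation:

```python
import json
from pathlib import Path

def split_project_json(project_json_path, output_dir):
    """Rewrite one big tree-shaped project.json into per-relation NDJSON files."""
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    # For brevity this loads the whole file; a streaming parser would not need to.
    tree = json.loads(Path(project_json_path).read_text())

    scalars = {}
    for relation, value in tree.items():
        if isinstance(value, list):                        # e.g. issues, merge_requests
            with open(output_dir / f"{relation}.ndjson", "w") as out:
                for record in value:
                    out.write(json.dumps(record) + "\n")   # one record per line
        else:
            scalars[relation] = value                      # plain project attributes

    (output_dir / "project_attributes.json").write_text(json.dumps(scalars))
```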
C: I think that's a good idea. I also think — correct me if I'm wrong — that the top-level nodes that live one level into that tree mostly do not strictly depend on each other. I mean, labels is one that is clearly a dependency for many other things, but for the rest of what's on this first level, that's probably not the case.
D: Correct, yes, the top-level relations can be imported independently, and for things like labels there is — you probably saw it — the existing-object check and things like that. Those are a set of models that are special, where instead of creating a new object you try to locate an existing one and, if it exists, use it; if it doesn't, create a new one.
A: I would be thinking of a track where we just get the JSON — we don't have the YAML — and the top-level relations, as you said, and we process the JSON as a stream instead of parsing it all, rewriting parts of this JSON into separate files. Then, as part of our steps, I would actually think about two approaches: the first is doing something that takes the current items and rewrites them, within that directory, into multiple NDJSON files; and the second is that we model clear steps for each relation, and each of these steps then consumes its separate NDJSON file and just reads it line by line. Memory-wise that would mean we never hold the big file in memory: first we stream the file to create the separate ones, then we read lines from each of those files, but we never have to parse the whole structure of the big file.

A: And if we get to the point where we provide NDJSON on the export side as well, then on the import side we basically already have this NDJSON handling implemented — we would just remove the transformation logic, because we would not need it anymore. So it could be useful to twist the problem around like this to reduce the memory usage on the import.
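A sketch of the consuming side under the same assumptions — each per-relation step reads its NDJSON file line by line, so memory stays bounded by a single record rather than the whole export; save_issue() is a placeholder for the real persistence:

```python
import json

def each_record(ndjson_path):
    """Yield one parsed record at a time; the whole file is never held in memory."""
    with open(ndjson_path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def save_issue(issue):
    pass  # placeholder for the real per-record persistence

def import_issues(ndjson_path="tree/issues.ndjson"):
    for issue in each_record(ndjson_path):
        save_issue(issue)
```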
C: There is also an opportunity to parallelize there, so that we could fork off multiple smaller imports. They wouldn't be able to do everything again, of course, because the project should only be created once, but if we have multiple files where the relations defined in them do not depend on each other — or only loosely, like things that aren't created anyway when they already exist — then we could run a bunch of these things in parallel, right? That should speed things up.
A: Yes, but running it in parallel is a little more complex, because running it within a single process doesn't bring a lot of benefit — or maybe it actually does bring some benefit: it would mean you parallelize over the database connection, so you execute multiple database queries, maybe concurrently. But for the CPU processing you would be saturating yourself, because you would be fighting with the other threads for this single core.

A: And the problem is that the team is now working on creating ephemeral storage for the import and export, and this is the limitation on the Sidekiq side: we cannot split this across Sidekiq jobs, because each time you would have to extract the big archive in order to access the one file that you would like to read. This is why I proposed the zip: with a zip you don't have to extract the whole archive to read an individual file.
A: You can read an individual file by reading the central directory of the zip archive, which is always at the end of the archive, so you know exactly where in the file the content starts, where the compressed content is and how big it is, and you can basically stream from any type of I/O. It could be object storage: you just request a range of the file from the object storage and you extract exactly what you want to extract. So zip is flexible for that.
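A small illustration with Python's standard zipfile module and a hypothetical archive layout; ZipFile reads the central directory at the end of the archive, so a single member can be listed and streamed without extracting everything:

```python
import zipfile

def handle(line):
    pass  # placeholder for per-record processing

# Hypothetical export layout: one NDJSON file per relation inside the archive.
with zipfile.ZipFile("project_export.zip") as archive:
    print(archive.namelist())                # names come from the central directory
    with archive.open("tree/issues.ndjson") as member:
        for line in member:                  # streams only this member's bytes
            handle(line)
```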
B: Well, considering what was said about the linear order, I really like the idea of introducing abstractions like a task — something that needs to be added to the database — plus a scheduler and a consumer. If we model these entities we could basically scale from them, and maybe, if we have a single scheduler, we could even control the pressure on the database and avoid the situation where a couple of huge imports drown our GitLab instance, for example, because everything would go through some middle layer.
A: OK, we are at time, so I think we should wrap up for today. Let's spend some time looking over these items — if you could spend a few moments reading them — and I believe we should create a few issues for the ideas that were mentioned. Let's maybe start like this: I'm going to create a shallow issue for making the import process restartable, we will post our ideas in that issue, and then try to asynchronously define the steps and iterations for how we could achieve that and how it could work in the end scenario, once the analysis has been done. So thank you very much, have a good time. Thank you.