From YouTube: Manage:Import Onboarding - GitLab Migration
Description
An introduction to the GitLab Migration tool by George Koltsov.
Hello and welcome to the GitLab Migration feature overview. This is a GitLab feature that allows you to migrate groups from one instance of GitLab to another, and these are the two URLs where you can find more information about it.
Fill out the source URL and the access token, click Connect, and this is the page you will be presented with, which lists the top-level groups on your source instance. You then select the group you want to import and select the destination. So, for example, this is the source group that I have; it's open for me right here.
We have two subgroups here and two projects, and then in each of the subgroups we have two more projects. You then select the destination group on the destination instance of GitLab and click Import. I already did that, so it presents me with the option to re-import it under a new destination, but here you can see where it was last imported to. If we compare the two, we will see that the migrated group also has two subgroups and six projects in total.
With the existing import/export tool, you can export a project or a group and download the tarball file, and then on the destination you have to manually select that file and upload it. So if you have a lot of groups, say a large organization where you have to migrate over a complex group structure, with a number of subgroups that span deep into the tree (say 10 nested levels of groups, with a number of projects in each of those levels), you can imagine that becomes quite a tedious and time-consuming process.
The other reason is that import/export is a single process that can take quite a long time, especially if you have groups or projects of significant size; it can take hours, sometimes tens of hours, to import or export a certain project. So the new approach is a bit more distributed across multiple Sidekiq jobs, which should help with performance and avoid occupying a single process for too long.
So we have the BulkImport main model that stores the overall state of the import process. I did a little drawing, and I apologize in advance for how it looks, but here it is, and I'm going to move around it to show you a bit more. For now you can think of a bulk import as an entity that keeps track of the state of the overall import.
The URL and the access token, from the page that I showed you before where you import your group: when you enter this information, this is where it gets stored, so that the background processing can retrieve it and use it to connect to the source instance. If we take a look at the table structure, it has the source type; you don't have to worry about that too much at the moment.
It's always set to gitlab at the moment, but the grand vision for this codebase is to eventually migrate all of the importers (for instance GitHub, Bitbucket, etc.) into one way of importing. The most important thing for now is the status, and the second thing that a bulk import has is entities.
Entities represent one of two things: either a group or a project. Here's a little snippet that helps visualize what I'm talking about here.
In that case, we're going to create this kind of structure: we will have an entity of source type group entity, and its associations are preserved, so that, let's say, this group entity will have a parent ID pointing at this other group entity, and this will allow us to recreate the same structure on the destination.
So yes, the bulk import entity is a representation of either a group or a project. For instance, here we have the source type, which is either a group entity or a project entity, and that matches our drawing here.
Again, I apologize for the drawing, but I hope it gets the point across: for every bulk import (the overarching import) we create a number of entities in the database, each of which represents either a group or a project, and they all have a status that we need to keep track of as well. So it's like a tree: we have the import, which has its own status, and then the individual entities have their own statuses too.
Let's say in this case we have a root group, a subgroup, and a project, just like I said: group, subgroup and project. You have the source type, the namespace ID, the project ID, etc., and we can have a look at the project entities table.
That's the representation, but if you take a look at the contents, you can map it out and recreate the same structure on the destination. For instance, we have this record here with its source full path. We need to store the source full path, the destination name and the destination namespace, and then, if it's a group entity, it will have an associated namespace ID; if it's a project entity, it will have an associated project ID, and obviously the source type will change accordingly.
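To make that mapping concrete, here is a minimal sketch of what two such entity records might carry; the attribute names follow the columns just described but are illustrative rather than the exact schema.

```ruby
# Illustrative only: a group entity and a project entity as plain hashes, showing
# how source path + destination info is enough to rebuild the tree elsewhere.
group_entity = {
  source_type:           :group_entity,
  source_full_path:      "source-group/subgroup-1",
  destination_name:      "subgroup-1",
  destination_namespace: "imported-group",
  namespace_id:          42,   # filled in once the group exists on the destination
  project_id:            nil
}

project_entity = {
  source_type:           :project_entity,
  source_full_path:      "source-group/subgroup-1/project-a",
  destination_name:      "project-a",
  destination_namespace: "imported-group/subgroup-1",
  namespace_id:          nil,
  project_id:            1337  # filled in once the project exists on the destination
}

puts group_entity[:source_full_path], project_entity[:destination_namespace]
```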
That's something we're going to cover next. So, one more time: the bulk import stores the state of the overall import process, the configuration stores the credentials, and the entity stores the information about the group or project to import, along with its import state, so that we can keep track of what's finished and what's still in progress. Now, the next thing that we create after an entity is its pipeline trackers.
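As a rough mental model of how these records hang together, here is a sketch using plain Structs; the real ActiveRecord models live under the BulkImports namespace, and their exact names and associations may differ.

```ruby
# Sketch of the relationships described above, not the actual GitLab models.
BulkImport    = Struct.new(:status, :configuration, :entities)
Configuration = Struct.new(:url, :access_token)   # source credentials
Entity        = Struct.new(:source_type, :source_full_path, :status, :trackers)
Tracker       = Struct.new(:relation, :pipeline_name, :status)

import = BulkImport.new(
  :started,
  Configuration.new("https://source.gitlab.example", "<access token>"),
  [
    Entity.new(:group_entity, "my-group", :created,
               [Tracker.new("labels", "LabelsPipeline", :created)])
  ]
)

puts import.entities.first.trackers.first.relation  # => "labels"
```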
Those are records that keep the state of the import for smaller pieces, such as the labels import, the milestones import, or the epics import.
If we have a look at the bulk imports entity, it has many trackers, it has many failures, and it has a state. The model itself is quite small, but if we open the tracker, we see that the tracker stores the relation name (for example labels, milestones, or epics) and the pipeline name.
If we have a look at the trackers table, it has a lot of information, but let's focus on this first one for now. It's a badges pipeline, which indicates that we are importing badges via this particular pipeline. There's the job ID, the stage (I'll cover stages in a bit), and the status 2, which means finished.
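So a finished tracker row can be pictured roughly like this. This is a sketch; the column names and the pipeline class name are approximations, though the status value 2 meaning finished comes straight from the demo.

```ruby
# Illustrative tracker record for the badges pipeline mentioned above.
tracker = {
  relation:      "badges",
  pipeline_name: "BulkImports::Groups::Pipelines::BadgesPipeline", # assumed name
  jid:           "abc123",   # id of the Sidekiq job that ran the pipeline
  stage:         1,          # stages order the pipelines; covered further down
  status:        2           # 2 == finished, per the demo
}

puts tracker.values_at(:relation, :status).inspect
```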
So you can map it out for yourself: here are the trackers, and they are all actually performed via the pipeline worker. The pipeline worker is the one that actually performs the import, so we have this tree of asynchronous workers.
I mean this in a distributed fashion, so that it's not all performed within one single process. These workers perform pipelines (I haven't touched on what pipelines are yet, but I will in a second), and when the pipeline worker executes, if something goes wrong at any point of the import, it gets recorded as a failure.
So, like I said, the pipeline tracker is the one that is executing the pipeline; this tracker is the pipeline tracker. But what is a pipeline? A pipeline is where you have separate parts, separate pieces of responsibility, each responsible for a smaller thing. We have a separate extractor that goes ahead and fetches the data; then we have the transform layer, where we transform the data to fit our needs (normalize it, clean it up, or whatever); and then we load it. In our case, every use of the load step just persists to the database, but you can think of the load step as whatever finally writes the data.
So what would a pipeline look like? We have a custom DSL built for bulk import pipelines, and you can see it in this module here, BulkImports::Pipeline. It allows you to define extractors, transformers and loaders. You don't have to use the DSL if you don't want to.
Anyway, some pipelines don't use the full DSL and some do. Here's how an example pipeline can look. This is the members pipeline, which is responsible for importing group members, and the extractor here is the GraphQL extractor.
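To make the shape of that concrete, here is a toy stand-in for the DSL, just to show how a pipeline declares its parts. It is not the real BulkImports::Pipeline module, and the extractor, transformer and loader names below are placeholders.

```ruby
# A toy version of the pipeline DSL idea; the real module is BulkImports::Pipeline
# in the GitLab codebase, and real pipelines reference actual extractor classes.
module PipelineDsl
  def pipeline_parts
    @pipeline_parts ||= { transformers: [] }
  end

  def extractor(name)
    pipeline_parts[:extractor] = name
  end

  def transformer(name)
    pipeline_parts[:transformers] << name
  end

  def loader(name)
    pipeline_parts[:loader] = name
  end
end

class MembersPipeline
  extend PipelineDsl

  extractor   :graphql_extractor      # fetch group members from the source GraphQL API
  transformer :prohibited_attributes  # drop attributes we never copy across
  loader      :members_loader         # persist each member on the destination group
end

p MembersPipeline.pipeline_parts
```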
If you haven't seen the import/export codebase, it might be useful to check it out, but the major thing is that import/export is very generic, to the point where it's very hard to understand what's going on, and in order to understand anything you need to understand everything. With this approach we wanted to steer away from that a little bit and bring more modularity to our changes.
For instance, if you take a look at the relation factory and what's going on there, it's quite hard to understand. It is a factory, though, so it does try to accommodate all types of incoming data. But we wanted modularity with the new approach, and the ability to swap modifications in and out on demand, with a bit more visibility.
So we have an example pipeline here, and the extractor is defined as the GraphQL extractor. There are currently three types of extractors in the BulkImports namespace. The GraphQL extractor just fetches the data from the GraphQL API; we then transform that data and persist it. A good example to show, actually, is maybe not the members pipeline, although that is a good one, but the group pipeline.
The very first thing that we do during a group import is to create an empty group, in order to then fill it out with information, and this is where that happens: we use the GraphQL extractor, meaning we fetch the data from the GraphQL API.
abort_on_failure essentially means that if something fails and we couldn't create the group, we abort the whole entity and mark it as failed, which makes sense, because you don't want to import labels, or import anything really, if the group creation failed. So some of the pipelines have this attribute, which helps prevent certain kinds of failing imports.
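In the pipeline class that intent shows up as a flag, something along these lines. This is a self-contained sketch; the exact method or attribute name in the GitLab codebase may differ.

```ruby
# Sketch only: a pipeline marked so that its failure aborts the whole entity,
# because nothing else can be imported into a group that was never created.
class GroupPipelineSketch
  def self.abort_on_failure!
    @abort_on_failure = true
  end

  def self.abort_on_failure?
    !!@abort_on_failure
  end

  abort_on_failure!
end

puts GroupPipelineSketch.abort_on_failure?  # => true
```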
The next type of extractor we have is the REST API extractor, which extracts data from the REST API. One example is the subgroup entities pipeline, which defines its own extractor, but under the hood it just uses the HTTP client.
You can see that there is a tracker for it, meaning that this pipeline is going to be executed, and essentially what it does is check whether the group that we just processed has any subgroups: we go to the source and ask whether this group has any subgroups.
If it does, we clean up the data through these transformers and then we create more entities for them. This allows us to preserve the structure where this group has two subgroups.
That's how we gradually fill everything out with information, top-down, starting with the root group: perform all the imports there, then check whether there are any subgroups (yes, so create entities for them) and whether there are any projects (yes, so create project entities for them), and then we gradually work through everything. When a subgroup gets processed we perform the same thing: are there any subgroups in this group? Yes, okay, create more entities.
And then, finally, we have the NDJSON extractor, which technically also uses the REST API, but what it does is download a file ending in .ndjson.gz. I'll show you what it does on, let's say, the epics pipeline.
It downloads a file, decompresses it, and then we read it and yield the contents; that's all it does, and I will touch on the NDJSON pipelines later on. So currently we have three types of extractors, or technically two, with the third being a more custom one on top of the REST API with a bit more logic: downloading the file, decompressing it, and then yielding the result.
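The mechanics of that third extractor are roughly the following. This is a self-contained sketch of the NDJSON idea rather than the GitLab class: one JSON document per line, gzip-compressed, read back line by line.

```ruby
require "json"
require "zlib"

# Write a tiny epics.ndjson.gz locally (standing in for the downloaded export),
# then decompress it and yield one parsed relation per line.
path = "epics.ndjson.gz"

Zlib::GzipWriter.open(path) do |gz|
  gz.write({ title: "Epic 1", labels: [{ title: "bug" }] }.to_json + "\n")
  gz.write({ title: "Epic 2", labels: [] }.to_json + "\n")
end

def each_relation(path)
  Zlib::GzipReader.open(path) do |gz|
    gz.each_line { |line| yield JSON.parse(line) }
  end
end

each_relation(path) { |epic| puts epic["title"] }
```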
Most of the pipelines don't even need a custom class with its own load method.
For example, here you can just define the load method directly in the pipeline. I guess the good thing to do is to check this Pipeline module, which includes the Runner module, and that's where the run method is defined. That's where we call the extractor, then run all of the transformers on the extracted data, and then call the load method as well.
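Conceptually the run flow ties those pieces together like this; a plain-Ruby sketch of the idea, not the actual Runner module.

```ruby
# Run flow sketch: call the extractor, push every record through the transformers
# in order, then hand each result to the load step.
def run(context, extractor:, transformers:, loader:)
  extractor.call(context).each do |record|
    transformed = transformers.reduce(record) { |data, t| t.call(context, data) }
    loader.call(context, transformed)
  end
end

# Toy usage: "extract" two labels, normalize their titles, "load" by printing
# instead of persisting to the database.
run(
  nil,
  extractor:    ->(_ctx) { [{ "title" => "Bug" }, { "title" => "FEATURE" }] },
  transformers: [->(_ctx, data) { data.merge("title" => data["title"].downcase) }],
  loader:       ->(_ctx, data) { puts "persisting #{data.inspect}" }
)
```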
There's also logic around performing each step and logging the information. One thing to note is that a good place to check the logs is the importer log; for example, locally it can be found in the GDK log directory.
So why don't we just use GraphQL, and why do we need all of this? The reason is quite complex, because initially the idea behind this approach was indeed to just use GraphQL.
Let me open the project one. What we import and export is quite extensive; it's quite a lot. This is what we cover in project import/export, and one of the requirements for this solution was parity with the import/export tool. If we want to use the GraphQL API, then first of all we don't have everything in the GraphQL API to cover all of this.
But what became apparent is that nested sub-relation complexity is what would give us the most trouble, and what I mean by that is this: for instance, say we want to fetch a list of merge requests that includes nodes, where every node includes events and every event includes a push event payload.
You can see how many layers of nesting there are (one, two, three, four), and there can be a virtually unlimited number of nested relations. That became problematic with GraphQL, because there are query complexity limits on gitlab.com, and in general the GraphQL API that we ship with GitLab has query complexity limits that we cannot exceed; beyond them the GraphQL API will simply not return a result.
The other problem is that the GraphQL API has nested pagination, which would be extremely difficult to manage, and I have an example for you here. Say we have a project and we want to fetch a list of issues. By default the page size for GitLab is 100 and you cannot exceed that (or maybe you can, I'm not entirely sure, maybe it's 500); regardless, there is a limit.
For every issue we also want to fetch the notes, and for every note we want the author and the award emoji, and for every collection that is returned there is a cursor that we have to manage, the page info. Okay, the syntax here isn't valid, but people who have worked with GraphQL before know what I mean: there's a before cursor and an after cursor at this level.
So there's something like that here, something like that here, and something like that here. Can I prettify this? No. Anyway, you get the idea: you have three different pagination cursors to manage, which is, first of all, quite difficult to do, but it would also be slow and would return results that you don't need.
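To see why that gets unwieldy, here is roughly the shape of such a query, held in a Ruby string purely for illustration. The field names approximate GitLab's GraphQL schema, and every nested collection brings its own cursor to track.

```ruby
# Each nested collection (issues -> notes -> award emoji) returns its own
# pageInfo/endCursor, and all of those cursors have to be advanced together.
NESTED_QUERY = <<~GRAPHQL
  query($fullPath: ID!, $issuesCursor: String, $notesCursor: String) {
    project(fullPath: $fullPath) {
      issues(first: 100, after: $issuesCursor) {
        pageInfo { endCursor hasNextPage }
        nodes {
          title
          notes(first: 100, after: $notesCursor) {
            pageInfo { endCursor hasNextPage }
            nodes {
              body
              author { username }
              awardEmoji(first: 100) {
                pageInfo { endCursor hasNextPage }
                nodes { name }
              }
            }
          }
        }
      }
    }
  }
GRAPHQL

puts NESTED_QUERY
```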
It's slow and essentially a maintenance hell, and on top of that we would be producing N+1 queries. What I mean by that is: if we do not use this approach with nested pagination, then we have to fetch epics, or any sub-relation, one by one. For example, let's say we import 500 issues, no problem, and then for every issue we want to fetch the notes, so that essentially becomes one request per issue.
You then multiply that by the number of sub-relations that you have, and that's not even considering that there can be more than one layer of nesting in some relations, so it quickly adds up and the numbers become unmanageable.
For instance, to import 10,000 epics at 100 epics per page you spend 100 network requests, but if you then also want to import their events and labels, that's 10,000 network requests each in order to avoid the nested pagination problem, which is also not ideal.
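The back-of-the-envelope arithmetic behind those numbers, using the figures from this example:

```ruby
# Rough request counts for importing 10,000 epics without nested pagination.
epics         = 10_000
page_size     = 100
sub_relations = 2                       # events and labels, fetched per epic

epic_pages     = epics / page_size      # => 100 paginated requests for the epics
per_epic_calls = epics * sub_relations  # => 20,000 one-by-one sub-relation requests

puts "epic list requests:     #{epic_pages}"
puts "sub-relation requests:  #{per_epic_calls}"
puts "total network requests: #{epic_pages + per_epic_calls}"
```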
So we decided on more of a hybrid approach, where for some pipelines we use GraphQL, for some pipelines we use the REST API, and for some pipelines we use NDJSON, simply because it's much faster. Here, 10,000 epics with events and labels is a file that is, and this is a complete guess, let's say one megabyte, compared to making 20,000 network requests.
Obviously that's going to be way, way faster, both in processing and in the sheer time it takes to perform all those requests, and the import of nested data is already handled quite well by import/export. That does bring us back to the original point, though, the big disadvantage: why did we want this in the first place? One of the reasons is so that the user doesn't have to deal with files; the other is that we wanted modularity.
But with this we came partly back to the original approach, which is quite hard to understand, and again, if you've seen my other overview, or if you just take a look at the import/export codebase, you'll see there's a lot going on, with terminology like the relation factory, the object builder, the tree restorer, all that good stuff.
So that's currently what we are doing here, and there are a number of NDJSON pipelines; for example, the epics pipeline for groups is NDJSON.
But the members pipeline is not, and the rule of thumb that we landed on is: for simple, flat relations like members, labels or milestones you can probably use GraphQL; for more complex stuff you should be using the NDJSON approach, just because it handles sub-relations so much better.
And how do you know which approach to use? Mostly you just take a look at the import/export configuration file. If the sub-relation is quite flat, then you might not even need the NDJSON pipeline, but if it's something like this, then you definitely need the NDJSON pipeline.
There are two workers that get enqueued: the export request worker and the entity worker. The export request worker is the one that just makes the network request.
It essentially tells another instance of GitLab: hey, please export these relations. We have the export relations API for that, which is currently available for group export (not yet for projects, but the project one is ongoing). So here it is, and it's very similar to the regular export, where you get a tarball, except that instead of a tarball you get small gzipped files.
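As a sketch of that request cycle from the importer's side; the endpoint paths and parameters below are assumptions based on the group export relations API mentioned here, so check the current API documentation for the exact shape.

```ruby
require "net/http"
require "uri"

# Sketch only: ask the source instance to export a group's relations, then later
# download a single relation file. Paths are assumed, not verified here.
source   = "https://source.gitlab.example"
group_id = 42
token    = "<access token from the bulk import configuration>"

start_uri = URI("#{source}/api/v4/groups/#{group_id}/export_relations")
start_req = Net::HTTP::Post.new(start_uri)
start_req["PRIVATE-TOKEN"] = token

download_uri = URI("#{source}/api/v4/groups/#{group_id}/export_relations/download?relation=labels")
download_req = Net::HTTP::Get.new(download_uri)
download_req["PRIVATE-TOKEN"] = token

Net::HTTP.start(start_uri.hostname, start_uri.port, use_ssl: true) do |http|
  puts http.request(start_req).code      # export accepted / being prepared
  puts http.request(download_req).code   # later: fetch the relation file when ready
end
```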
So rather than one big archive you get smaller files, such as milestones.ndjson.gz.
And boards, epics, whatever we requested the export of. So when the import starts, we go to the source and ask it: can you export all of your data? Then, by the time the NDJSON pipeline starts, we fetch that information from the source if it's available; if it's not available, we reschedule the job for the future, and that can be seen here, I believe, right here. So this is the pipeline worker, and here we check whether the export is ready.
One more thing that I forgot to mention but just remembered: the pipelines' execution order matters. We want to execute the group pipeline first, because the group has to be created before we import labels into it. For that there is a file called stage, which defines the order of the pipelines; for example, the group pipeline is stage zero.
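A sketch of what that stage configuration expresses; the hash below is illustrative rather than the real file, but the ordering follows what is shown in the video.

```ruby
# Pipelines grouped by stage: everything in stage N must finish before stage N+1
# is enqueued, so the group exists before anything is imported into it.
STAGES = {
  0 => [:group_pipeline],
  1 => [:group_avatar_pipeline, :subgroup_entities_pipeline, :members_pipeline],
  2 => [:labels_pipeline, :milestones_pipeline, :badges_pipeline],
  9 => [:finisher_pipeline]   # marks the entity finished/failed at the very end
}.freeze

STAGES.sort.each { |stage, pipelines| puts "stage #{stage}: #{pipelines.join(', ')}" }
```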
We then enqueue jobs for the next stage and wait until all of the stages are complete. You can see the list here: the first thing we do is import the group, after that it's the avatar, subgroups, members, etc., and the final stage is the finisher pipeline.
Then we wait for all of the entities to report back, and finally we update the import state as well, to either finished or failed. That's how it's performed. We also have a stage file for projects, but it's currently under development, so it doesn't have much in it; all of the projects that I've shown you here are empty. They don't even have a repository yet. Sorry, that's the wrong group.
It doesn't have a repository yet, nothing, no contents, but you can probably imagine that repository import is going to come next, and after that all the other data like labels, milestones, uploads, LFS, etc.
Lastly, the current comparison, like I mentioned before: group migration is covered and should have feature parity with group import/export. Project migration is under development and mostly not covered; it's behind the feature flag bulk_import_projects, and the overall feature flag for this feature is bulk_import.
I guess that's the overview. Obviously there's a lot to take in and a lot of stuff going on. I haven't even shown the contents of the NDJSON pipeline module; I probably should have, and I will right now. The NDJSON pipeline module defines the transform method, and, like I said before, it uses import/export functionality, the relation factory, for the transformation.
We transform a JSON string into an object: if you have a label JSON object, we transform it into a Label object, an ActiveRecord model that is then persisted in the database.
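Roughly, the idea is this: a self-contained sketch of turning one exported relation into a model-like object. The real code routes this through the import/export RelationFactory and builds actual ActiveRecord models.

```ruby
require "json"

# One exported label arrives as a single NDJSON line; the transform step turns
# that JSON into attributes a model can be built from, keeping only what is
# portable across instances (no source-side ids or timestamps).
Label = Struct.new(:title, :color, keyword_init: true)

ndjson_line = '{"id": 7, "title": "bug", "color": "#FF0000", "created_at": "2021-01-01"}'

attributes = JSON.parse(ndjson_line)
                 .slice("title", "color")
                 .transform_keys(&:to_sym)

label = Label.new(**attributes)   # stands in for the ActiveRecord model
puts label.title                  # => "bug"
```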
There are a lot of pieces to this and a lot to take in, but I hope this overview helps clear some of these concepts up and helps you understand this feature better.