From YouTube: GitLab geo - Secondary proxying discussion
A: All right. Hey, Catelyn, thanks for volunteering, or agreeing, to talk about the secondary-proxying-to-the-primary feature that we've built. So I'll let you kick things off for us.
B: Yeah, sure thing. For the Geo proxying feature, I'll try to give a bit of the backstory. I think the original reason we wanted to implement this was mostly because we'd heard from some customers that wanted a feature like this, and also, looking through the metrics, I think we realized that people weren't really using the Geo secondary web interface very much, just because the experience wasn't very good.
B: So this feature essentially wanted to bring a bit more usefulness to the UI of Geo secondaries, just because we know they have the data there, and they're also geographically closer to some of the users, or at least a subset of users.
B: It proxies all the web traffic to the primary, except a very few specific requests, like Git requests, that can be served locally. But essentially, for the web interface, all the traffic is proxied to the primary. So there is still this concern that if the secondaries are very far away from the primary and there's network lag between them, the experience might still not be the best, just because you have to go through multiple hops: through the secondary and then to the primary.
B: But we're thinking it's an improvement over the previous experience, because reads can be served locally, right, but it's also an improvement because the user doesn't go directly to the primary; the request goes through the secondary site. The sites would usually have better networking between them than the user going to the primary directly from their own region.
A: So we're not redirecting the user; we're proxying the connection to the primary. So there's no redirection happening at all from the secondary to the primary. Okay, good, that's understood. Okay, and you mentioned that we don't proxy Git requests. Could you talk a little bit more about that, and what we do and don't do with respect to them?
B: I can try and share my screen while I look for those.
B: Yeah, I don't think we did. I think I can talk a bit about it from this table, because this is essentially the list of the things that we do and don't proxy. Because the proxying is currently implemented in Workhorse... let me see.
B: So, the Workhorse on a Geo secondary, if it's configured as a proxy site: when the request comes in through NGINX, it will reach Workhorse, and then Workhorse will proxy it directly to the primary site, right.
A: We'd say... so, as far as the user is concerned, they're still connected to the secondary site. So the connection terminates on the secondary, and then another connection is established to relay the request to the primary. Is that the right way to look at it?
B: Technically, yeah. So the user will still see the same request; Workhorse will initiate another request to the primary, but the user will be served the data back in the same request, so they won't see any redirect. They won't see any kind of extra hop.
B: It goes through the two Workhorses, okay. Yeah, and from this table: essentially, because we know that the Geo secondary already has data replicated to it, we can accelerate some of the read-only requests by serving the data locally, where that is the case and where we know it's possible.
B: So in this case we have a table here. If you're using the web UI, that's the problem I just mentioned: all the web UI traffic at the moment is proxied to the primary.
B: So this would still be served by the primary. But if you're using Git, like if you're actually doing a git clone or git pull: if the repository exists on the secondary and we detect that it's not out of date, that it's been properly synced to the secondary, then we'll serve it directly from that secondary instead of proxying to the primary. We'll just short-circuit the proxying in Workhorse and serve it locally.
B: And the same happens with personal snippets, which are repositories now, since we've converted them to be repositories. If you're using the web UI, they will always be proxied to the primary at the moment, but if you're using Git, like git clone or git fetch, they will be served from the secondary if possible.
B: Exactly the same with group wikis; they're also repositories. And then other data types are currently still proxied. So, for example, if you try to get an avatar, or an image that is a user upload, at the moment those are still proxied to the primary.
A
Okay,
so
so,
for
example,
if
a
user
uploads
an
image
as
part
of
a
comment
that
would
go,
go
to
the
primary
to
be
fetched
still:
okay,
okay,.
B
Yeah,
I
think
this
is
something
we
could
improve
in
the
future
and
kind
of
try
to
detect
whether
more
that
data
types
exist
locally
and
serve
them.
But
it's
it's.
B
It's
a
bit
more
complicated
for
workhorse,
because
workforce
doesn't
have
doesn't
really
have
any
any
inner
knowledge
about
the
actual
request
like
it
doesn't
really
parse
the
entire
request
to
know
exactly
whether
I
don't
know
we're
accessing
a
data
type
that
exists
and
be
synced
to
the
secondary
or
whether
it
needs
to
proxy
to
a
memory,
which
is
why,
at
the
moment,
we're
just
proxying
everything
to
the
primary.
A
Oh,
I
see
I
see
so
understood,
so,
if
you
you
could
have,
let's
say
the
uploads
are
available
on
the
secondary
or
some
of
the
uploads.
Let's
say
we
have
an
issue
and
some
of
the
uploads
already
available
on
the
secondary.
We
could
serve
those
and
then
potentially
reach
out
to
the
primary
do
do
fetch
the
the
missing
ones.
Is
that
the
way
you're
looking
at
this
or
go.
B: And then again, with LFS objects: if you access them through the web UI, they're still going to be proxied to the primary. It's the same idea, the same concept as we discussed. When you're using Git, it's somewhat interesting, because it's still an HTTP request, but because it's a different path, a Git-specific path, we can bypass that, and for the LFS objects we can.
A
So
can
I
just
pause
you
there
when
you
say
we
can
serve
them
locally
if
it
hasn't
replicated
when
we
serve
them
locally,
we
only
said
what
we
have
right.
So
let's
say
the
primary
is:
has
a
few
objects
that
are
that
haven't
been
replicated
the
secondary
when
you
access
it
as
a
user,
you
only
see
what
was
replicated.
There's
no
indication
that
it's
behind
it
there's
no
indication
that
the
primary
has
extra
information.
A: What is the current experience when it comes to... no, yeah. So do we intelligently just pull it from the primary if we don't have it on the secondary, because we know it should exist but it hasn't come across? So the request just gets proxied to the primary?
B: Essentially, for a Git route, we would redirect to the primary. That's the redirect for when proxying is not used; but when proxying is used, it will redirect to a path that will proxy to the primary.
B
If
that
makes
sense,
okay,
so
essentially
what
yeah
sorry
go
ahead,
go
ahead
finish
it
yeah.
I
was
going
to
say.
Essentially
the
logic
here
was
basically,
if
we.
B
We
have
one
like
an
out
of
date
check.
So
if
we
like,
we
receive
a
request,
I
don't
know
a
pool
or
a
fetch
request
for
a
specific
repository,
a
specific
lfs
object.
If
we
detect
that
that
was
not
synced
to
the
secondary
or
is
behind
the
primary,
we
would
just
proxy
to
the
primary
directory
instead
of
serving
it
locally
because
we
know
it's
behind
and
we
know
like
the
primary
has
a
newer
version,
so
we
have
to
serve
it
from
from
the
primary.
A: Gotcha. And just since it was mentioned in the code, there's a redirect, right? Under what circumstances do we redirect, and in what circumstances do we proxy?
B: Yeah, so I think we always redirect; it's just the thing with proxying. Let me see if I can...
B: Yeah, here. So essentially we'll always redirect if we need to. However, with proxying, when proxying is enabled, we'll redirect to a route like this, a from-secondary slash path, and that's when a repository is out of date or we need to serve it from the primary.
B: When proxying is not used, we redirect to the primary URL. Okay.
B: They will see a message in their git clone or something, "you've been redirected to" the primary's push-from-secondary URL, when that's the case. But when proxying is used, and I think that's especially the case with unified URLs, because the URL is the same, they will be redirected to the unified URL slash this path, but they'll hit the secondary.
A
But
but
in
terms
of
where
the
connection
is
terminated
from
the
user,
if
it
is
proxy,
the
connection
terminates
on
the
secondary
right.
If
it's
redirected,
then
the
connection
actually
terminates
on
the
primary
exactly
okay,
understood.
B: Yeah. And then Pages is not going to be proxied at all, just because it's a separate service altogether. So requests to Pages don't go through Workhorse; they get served by the GitLab Pages binary, which is a separate service.
B
So
we
we
can't
really
proxy
those
anyway,
which
is
why
I
wave
with
my
send
it,
but
I
think
anyway,
they
could
use
the
same
url
just
that
access
control
is
not
supported
at
the
moment
for
just
secondaries,
but
they
could
still
use
the
same
url
and
if
they
reach
a
secondary,
they
would
still
get
the
files
because
the
files
get
replicated
to
your
application,
so
they
would
still,
like
pages,
would
still
work
for
them.
A
And
and
since
we're
talking
about
access
control
here
as
well,
could
you
mention
I
can
share
how
how
access
control
is
managed?
Can
you
is
it?
Is
it
separate
for
secondary
and
the
primary,
or
does
the
kind
of
authentication
access,
kind
of
policies
all
reside
on
the
primary
and
and
the
requests
when
you
try
to
log
into
a
secondary?
Does
it
get
proxied
to
the
primary.
B
Yeah,
so
we
don't
think
access
control
here
for
pages
for
pages,
a
bit
more
specific.
There
are,
I
think,
three
possible
types
like
we
have
every
one
which
is
kind
of
public.
B
I
think
you
have
everyone
with
like
project
members,
which
is
essentially
private
right.
I
think
you
might
have
internal
as
well
like
anyone
logged
into
the
instance.
Those
settings
are
stored
in
the
database,
so
they
would
be
replicated
to
secondaries
anyway
through
the
database
application.
So
the
database
does
geosecondary
would
know
about
the
same
access
control.
B: It's just a question of when the access control happens. How it essentially works is that when the request hits Pages, if the user wasn't authenticated before, it needs to get redirected; it does an OAuth flow, yeah, with the GitLab API, the GitLab Rails, the actual GitLab instance. Okay, so on the primary that works, everything is okay. But because it's an OAuth flow, it needs to write session data, all those things like last session time and last logged in, and the Geo secondary database is not writable.
B
The
oauth
to
flow
would
not
work
on
a
secondary,
which
is
why
access
control
doesn't
work
on
secondary.
So,
if
like,
if
the
project
is,
has
access
control
options
set
to
everyone
and
it's
like
public,
then
this
flow
wouldn't
happen
and
it
would
work
it
should
work
on
the
secondary
as
well.
Okay,
so
so.
A
Other
than
pages,
if
you
go
back
to
projects
and
and
wikis
and
all
that
stuff,
let's
say
I
tried
to
log
into
the
secondary
right.
What
would
happen
for?
How
would
I
get
all
the
let's
say
I
wasn't
was
not
authenticated.
A
I'm
coming
in
I've
hit
the
url
for
the
second,
oh
I've
hit
the
unified
url.
I
end
up
on
a
secondary
node.
What
would
be
the
authentication
flow
for
me
as
a
user?
At
that
point,.
B: I think this would also be different with unified URLs versus separate URLs, but let's take the unified URL case first. So the URL will be the same; you'll hit the secondary site.
B
Yes,
it's
proxy
and
because
it's
proxied,
you
get
all
the
request,
headers
back
like
the
response,
headers
back
from
the
primary,
which
means
that
the
primary
essentially
sets
the
cookie
header
and
sets
the
like
the
session
header
and
everything
so
essentially
you're
logging
into
the
primary.
It's
just
that
your
request
to
the
primary
is
proxy
to
the
secondary,
like
you
hit
the
secondary
but
you're,
actually
using
the
primary.
A
And,
and
do
we
log
the
fact
that
the
the
login
session
was
proxy,
it
was
from
a
secondary
or
does
the
primary
just
handle
that
as.
B
I
just
logged
in
yeah
good
question.
I
don't
think
we
log
that,
so
what
we
do
is
we
we
generate
like
a
metric
when
a
user
is
logged
in
through
the
secondary
directly,
but
I
don't
think
we
do
that,
unlocking
like
on
the
actual
login.
Okay,
that's
a
good
idea,
cool,
that's
really
interesting,
yeah
and
for
repositories.
B
It's
also
very
similar
in
that
yeah
it's
similar,
but
it's
also
different,
because
we
need
to
figure
out
whether
we
can
serve
data
locally
for
repositories,
which
is
why
it's
different,
so
the
authentication
actually
happens
on
the
secondary
okay.
So
let's
say
you
do
a
git
clone
use
a
password
at
glab
instance.com
and
you
hit
the
secondary.
B
Let's
say
this
is
a
unified
url
yeah
and
the
secondary
will
verify
your
user
and
password
with
what's
in
the
database
or
your
token,
whatever
you
use
to
authenticate
okay
and
if,
let's
say
the
repository,
is
there
we
just
serve
it
locally,
we
serve
it
back.
We
don't
send
anything
to
primary.
B
If
the
repository
doesn't
exist-
or
I
don't
know
it's
out
of
date
and
we
need
to
proxy
to
the
primary
will
proxy
the
request.
As
is
to
the
primary,
which
means
the
primary
will
verify
the
credentials
as
well.
A
Okay,
so
I'm
just
thinking
this
through
sorry
bear
with
me:
you
hit
the
secondary.
It
means
the
secondary
has
everything
it
needs
to
authenticate
that
yes,.
B
Yeah,
that's
a
good
question.
I
think
we
do
like
normal
logging
like
actual
file
based
logging.
I
don't
think
we
can
log
them
in
the
database
because
just
because
the
database
is
read-only
sure
so
at
that
point
that
those
logs
will
only
be
on.
A
The
secondary
yes,
okay,
okay,
all
right
so,
okay,
great
so
username
and
passwords
get
replicated
to
the
secondary,
and
so
the
second
is
that
I
just
just
confirm.
I
want
to
confirm
my
understandings.
They
get
con,
oh
whatever
authentication
mechanism.
It
is
that
also
ssh
keys
get
get
yes.
B
Cool
and
that's
for
the
data
types
about
limitations,
I
think
we've
talked
a
bit
about
this.
Essentially
pages
is
a
separate,
separate
service
and
non-rails
requests
like
we
can't
proxy
them
just
because
we
implemented
the
proxy
at
the
workhorse
level,
which
means
pages
can
be
proxied,
but
they.
It
should
also
use
like
a
separate
domain
because
that's
a
requirement,
but
this
also
means,
like
the
container
registry-
cannot
be
proxied
either.
B
So,
if
you
try
to
do
like
a
docker
pool
or
something
for
an
image,
you
can't
use
a
unified
drill
for
that,
because
if
you
push
an
image
to
a
secondary,
let's
say
that
would
not
work.
So
you
need
to
use
like
either
a
separate
domain
or
use
different
domains
in.
A
Push
would
you
want
to
sorry,
let
me
just
confirm
my
understanding.
Are
you
you're
saying
you
could
push
a
container
image
to
the
gitla,
the
the
secondary.
A: Gotcha, okay. So, yeah, okay, that makes sense; I think I misunderstood. So you can't push anything to the secondary, because then, obviously, that data isn't tracked properly and won't get replicated. So okay, great. Yeah, we can get into those later; I think I'd like to understand a little bit more about this limitation.
B: Yeah, and I think with Kubernetes it works kind of the same way, in that it still uses Workhorse, and proxying is still implemented at the Workhorse level. So as long as we have a proper URL set for the primary, the secondary would know how to proxy, even... okay.
B
What
else
about
the
unified
urls
specifically?
Because
we
see
they
don't
talk
much
about
it
essentially
before
like
before
all
the
processing
work,
you
would
always
have
separate
urls
for
the
two
secondaries,
so
the
primary
would
just
have
to
leave
a
primary
test
euro
and
then
each
secondary
would
be
at
a
different
one.
B
With
proxing,
this
can
be
easier
like
we
can
make
it
work
with
unified
url,
just
because
the
the
requests
are
proxied
to
the
primary
and
kind
of
everything
in
the
web
happens
to
the
primary.
So
we
could
have
a
single
url
for
all
the
secondaries
as
well,
and
it
should
just
work.
Okay,.
B
I'm
not
100
sure,
like
we've,
we've
talked
about
it
a
bit,
but
we
didn't
really
find
like
a
good
use
case
where
someone
would
prefer
to
have
separate
domains
for
the
jet
secondaries,
but
it
would.
It
could
be.
B: Yeah, at the moment, if you have unified URLs, the only way to force a particular secondary would be at the DNS level. Let's say, I don't know, you use a location-aware DNS that could route specific IPs or specific locations to the secondary that you want, or you could just override it yourself manually. If you don't have access to the DNS settings, for example, you could override it yourself in, like, /etc/hosts or something, just to point...
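The local override B mentions might look like the following `/etc/hosts` entry; the IP address and hostname here are made up for illustration.

```
# Pin the unified URL to one particular secondary, for this machine only
203.0.113.7   gitlab.example.com
```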
A: That... yeah, there was a separate epic for that. I just wanted to understand if there were any subtleties or nuances like that.
B
Yeah,
I
think
this
is
more
like
the
high-level
epic
that
we
discussed
before
actually
going
through
the
implementation.
It
was
like
the
idea
that
the
secondary
should
act
as
a
primary
in
why,
where
and
why,
one
way
or
another,
and
that
one
way
ended
up
being
proxima,
crushing.
A
All
right,
I
don't
have
any
other
questions.
I
actually
I
do
italy
and
how
it
interacts
with
you
with
your
proxy.
A
Could
you
talk
a
little
bit
about
how
that
might
work
when,
when
gitly
is
deployed
both
on
primary
and
secondary
sites
in
an
aj
architecture,.
B
Yeah
technically
the
kind
of
interface,
the
only
the
only
way
to
reach
italy
would
be
to
give
up,
workhorse
or
kill
up
show
and
then
gillab
workhorse
is
essentially
the
web
ui.
So
if
we
proxy
the
request,
it
will
not
reach
the
secondary
guitar
at
all
like
it
would
go
directly
to
the
primary.
B
Oh
wait
for
web
requests,
at
least.
B
For
so
for
web
request,
it
would
go
actually
through
puma,
which
serves
the
web
requests
like
the
web
interface
and
like
it
will
go
to
the
promo
of
the
primary
okay
right
for
git.
Requests
like
if
you
do
a
git
clone
over
http
here
is
where,
like
in
workhorse,
is
where
we
will
check
if
it's
available.
B: Yeah, we would check with the local Rails API if it's available locally, if it's synced, like if it's not out of date and so on, and if that's the case, then we will connect directly to the secondary's Gitaly and serve the data back, okay.
B
But
from
that
perspective,
yeah,
nothing
exactly
should
should
change
and
it's
the
same
for
gitlab
show.
That's
just
for
ssh
for
git
operator
switch
operations
with
gitlab
shell
again,
if
we'll
we'll
go
to
workhorse
and
rails
to
double
check.
If
repository
system
is
up
to
date
and
so
on,
and
if
it
is
then
we'll
just
contact
the
local
italy
and
if
it's
not
the
will
initiate
proxying
through
rails,
which
is
one
of
the
open
issues
that
we
have,
because
if
you
do
a
proxy
or
ssh
push
like.
B
If
you
have
a
large
push
and
it
needs
to
be
processed
to
the
primary,
because
you
can't
push
to
a
secondary
yeah,
then
this
crossing
would
happen
in
the
in
the
puma
level,
which
means
60
seconds
timeout.
A: So, in a normal setup, when it's not being proxied, the push would come to GitLab Shell, GitLab Shell would talk to Gitaly, and that would not involve Puma. In that case, right, and that's why the push doesn't time out. But when you're...
A: ...trying to push onto the secondary, Workhorse wouldn't proxy it; it would get passed to Rails, to Puma? Yes, because...
B
Essentially
yeah,
so
all
the
checks
happen
in
puma,
because
workhorse
can't
really
like
doesn't
know
specifically
about
the
data
types
so,
and
it
also
doesn't
have
a
database
connection.
So
he
needs
to
to
do
all
those
checks
in
puma
and
at
this
point,
if
the
repository
is
like,
oh,
if
you
need
to
push
and
anything,
it
would
happen
from
puna
to
the
primaries.
A: So we couldn't create a job for Workhorse, once we know what needs to be done, so that it can complete the push? No? Yeah, it's synchronous. I see, I see, gotcha, okay, right. And so, just to complete the story there: Puma would talk to Gitaly directly on the primary node. So the connection would go, on the secondary, GitLab Shell to Workhorse, Workhorse to Puma, and Puma to Gitaly on the secondary... sorry, primary. That's where it will go, straight to...
B
Even
if
yeah
that's
that's
the
kind
of
edge
case,
even
if
you
do
a
git
as
a
stage
push
to
the
secondary,
it
will
end
up
as
a
git
http
push
on
the
primary,
because
we
use
the
api
of
the
primary
like
the
internal
kind
of
geo
api
of
the
primary
to
to
do
the
kit
operations.
A: Yeah, it might be a good idea to draw some of these diagrams out. It would be quite insightful, actually, to see them. I think it'll certainly be useful to see how the data flows, in case someone needs to troubleshoot these issues. For someone who isn't as familiar with the code as, for example, yourself, a diagram might help with understanding where things are moving. But yeah, that's great. That was a really good session. I don't have any questions at the moment. Any more questions? Thank you very much, Catelyn, I really appreciate it. Yeah, anything else, anything you...
A: ...had on the agenda? So we'll leave those for another session. Let's do a follow-up session for anything we haven't covered so far, but this was a great intro. So, thank you. Awesome, thanks. Let me stop the recording.