From YouTube: 2021-12-01 GitLab.com k8s migration EMEA
D
I'm late; supply chain.
A
No problem at all. So welcome, welcome everyone. This is the first December Kubernetes demo. Skarbek, you have the first demo item.
D
You know, I've ramped up the weight just to see some traffic flow through, watched everything for 10 minutes, and then I turned the weight down to what is going to be the default in the future, after the virtual machines get removed. So it was just temporarily taking more traffic, but during that period of time, when we were seeing just shy of 100 requests per second, everything was kind of flat and inside of our Apdex and error ratios. So I'm thinking we just need a little bit more traffic, you know.
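A minimal sketch of the kind of weight change being described, using HAProxy's runtime API; the backend and server names here are assumptions, not the actual production configuration:

    # Temporarily ramp the canary server's weight to attract more traffic...
    echo "set weight pages_http/canary 20" | socat stdio /run/haproxy/admin.sock
    # ...watch the dashboards for ~10 minutes, then drop back to the long-term default.
    echo "set weight pages_http/canary 2" | socat stdio /run/haproxy/admin.sock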
D
Maybe, with so little traffic, the Golang process is kind of spinning down: its memory is being freed up, so it needs to go out and request new data, or something. That's only a hypothesis at this moment. But beyond that, one of the problems that we had during one of our initial implementations was that the main stage error rate skyrocketed.
D
It was this chart down here; this chart just skyrocketed during the last attempt at implementing the canary stage. So that's good; we're good on this front. And then I wanted to showcase a little bit of the pod metrics. So if I go back to yesterday again, we'll see the CPU usage. Right now it's very low, because we get a whopping 10 requests per second, but yesterday, you know, we were hovering around 500 millicores.
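For reference, pod CPU numbers like these can be eyeballed with metrics-server; the namespace and label below are hypothetical:

    # Current CPU (in millicores) and memory per GitLab Pages pod.
    kubectl -n gitlab top pods -l app=gitlab-pages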
D
So, you know, we're serving traffic, we're doing good, right? And lastly, I want to quickly bring up the split.
D
So we can see that we do get requests: for this bucket of 30 seconds, we saw 380 requests to the canary stage, while we're still serving a whopping 40,000 requests per 30 seconds on the main stage. So that's where we are with canary and GitLab Pages. jarv found an issue with my production change request, so I'm going through that, and maybe later today we can start rolling out the rest of Pages for the main stage.
D
It wasn't difficult, because we already had our HAProxy cookbook set up to be able to accept canary; it's just some minor tweaks we had to do specific to GitLab Pages, because ports and such did not line up between our Omnibus installs and Kubernetes installs. Which is fine; we could work through it. It just made the implementation a little bit more difficult, but the fact that HAProxy already supported this made it tremendously easy.
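A rough sketch of the mixed backend being described, where Omnibus VMs and Kubernetes nodes answer on different ports behind one HAProxy backend; all names, addresses, ports, and weights are illustrative, not the cookbook's real output:

    cat >> haproxy.cfg <<'EOF'
    backend pages_http
      # The Omnibus VM listens on one port, the Kubernetes NodePort on another;
      # the weight controls how much traffic the canary stage receives.
      server omnibus-01 10.0.1.11:8090  weight 100 check
      server k8s-canary 10.0.2.20:30080 weight 2   check
    EOF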
D
It was just a matter of creating the stage and then sending traffic to it, which wasn't bad at all. I think if we were starting with canary from scratch, it would have been a completely different story. So yeah, it wasn't terrible at all.
B
So yeah, I have a question, Skarbek. Which route did we end up implementing here? Are we doing HTTPS termination over HTTPS termination? And we are just doing a tiny fraction of our own regular domain, the gitlab.io?
D
Yeah, there's no routing based on domain; it's strictly traffic percentage weights going to canary. So we don't care who hits it. There's nothing inside of QA that says, "hey, please direct me to a canary instance for canary testing." Nothing like that exists right now. If we do want that, we need a new issue to create it and get it sorted out, because I don't know how to do it quickly, and that is probably something we want to explore.
D
We may want a test project for that specific thing, which makes QA very, very specific to .com in the canary stage, which may not be wise. But that is kind of where we're standing right now: no testing exists, just the canary stage. It exists; I figured that's better than nothing, and it's a good start towards iterating towards a better Pages test model.
B
Yeah, sure. Now, just to double check: we do have an HTTPS load balancer that is routing to an HTTPS endpoint, which is provided by Pages. We're not doing TCP balancing here?
B
And how do we make sure? Are we doing SNI inspection? How do we make sure that we are only doing this for our own domain, or do we do this regardless of the domain name?
A
Awesome. And do you have a rough idea of what kind of pace of rollout over production you're roughly hoping for?
A
A week is super fast, right? You've definitely got release management stuff that's higher priority, so yeah, that's totally fine. Awesome, great progress.
D
And I guess the last thing I should mention is that we do still have the final piece of work, where we need to implement rate limiting inside of Pages. We're still waiting on an improvement to our Helm chart to land; I've got a merge request, and it's in review.
D
It's been kind of going back and forth a little bit because of the disconnect between how Pages gets configured and the way the Distribution team has wired things together. No fault of anyone; the conversation has just drifted a little bit, which is fine. But once that gets in place, we could upgrade our Helm chart and be able to support that feature set of GitLab Pages. So that'll be exciting too, especially for some domains that like to hit us kind of hard.
A
No? So, Ahmad, over to you.
E
So, a git clone: this is on the pre environment, basically pre-production, and the project is there. So when I'm trying to clone it, it should be here somewhere. Hopefully.
E
Yeah, it's here. So I think it's running. One other thing, actually: I enabled the metrics for this. We don't consume them yet in Prometheus.
E
But if I forward the port and get the metrics, they're already there, because it's enabled. Ta-da. So I think what comes next is consuming this in Prometheus, and, yeah, maybe... I don't know when we push this to production; actually, I don't know the roadmap for this.
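A small sketch of the port-forward check being shown; the namespace, service name, and metrics port are assumptions (this appears to be the new built-in SSH service, but treat all the names as illustrative):

    # Forward the service's metrics port to localhost...
    kubectl -n gitlab port-forward svc/gitlab-shell 9122:9122 &
    # ...and scrape it by hand to confirm the exporter is answering.
    curl -s http://localhost:9122/metrics | head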
E
To be honest, I don't know. I mean, maybe Skarbek could answer this.
D
Yeah, so my expectation at this point is: we've done the work for pre-prod, and I see two things that we kind of need, because we don't know the answers to these questions unless we want to try to tackle this work and do it ourselves. One, we need to know how to configure the resources we need for this new service. This isn't OpenSSH; this is something that we've created. We don't know how well it will perform under load.
D
The other thing we need: Ahmad is going to work on getting metrics going, and it sounds like we just need the necessary dashboards and charts and all that good stuff. But I would want to lean on someone else to create those, because they know what metrics are important to them, and they know what metrics we need to monitor.
D
So from my viewpoint, I think the next thing is: let's get metrics going, let's get the metrics available inside of Prometheus and Thanos and such, and then we could probably start working on staging. Unless there's something fun happening, I don't see anything that would hold us up from moving forward to staging. The readiness review is still open, and I think the performance stuff, like load testing, is still an open thread.
D
Last time I looked at it, I think that was the last thread that was still open, and then, after all of that is done, we could start on production.
F
This is just a follow-up to a conversation I was having with Skarbek and some other people. So we, as in both our team and Scalability, are looking at putting Redis into Kubernetes, and I was trying to do some performance testing comparing VMs to Kubernetes. Igor had already set up a Kubernetes cluster; I set up some VMs.
F
The idea is that the Helm chart automatically provides Sentinel for you. To match what we have in production, I set that up on the VMs; I made a few mistakes, which I've been fixing on the VM side. But on the Kubernetes side, I'm not 100% sure how this is going to work, because for Redis Sentinel, the clients need to be able to reach each Sentinel node.
F
Let me just share a terminal to start with. So this is using the LoadBalancer service for Sentinel, so I've got a single external IP that I can connect to, which I've done over here. First of all, I connected to the wrong thingy. And, well, first of all, I can only connect to one, so that's already not going to work, because I need to be able to connect to all three. If I do...
F
...this, we can see that we should have three. Oh well, apparently this one's failing for some reason, whatever. And this is using a load balancer, but that's not going to work, because (a) I need to be able to connect to all three, and (b) when I ask it what the address of the primary is, it gives me an internal address that's only routable inside the cluster. But we need external clients, I think, for at least two reasons.
F
One... well, no, three reasons. One is that during the migration we need to be able to talk from VMs to Kubernetes, and vice versa. Two is that our console nodes are still on VMs, and they need to connect to Redis, and that should probably use Sentinel, because otherwise we'll get weird failures when we do things like imports. And three, I'm not so sure on this one, but don't we already have multiple Kubernetes clusters for our Rails deployment in production? In which case we need to talk across clusters anyway.
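The failure mode being described can be reproduced with plain redis-cli against the Sentinel LoadBalancer; the IP below is made up, and "mymaster" is the Bitnami chart's usual master set name:

    # Ask Sentinel where the current primary is:
    redis-cli -h 34.123.45.67 -p 26379 SENTINEL get-master-addr-by-name mymaster
    # 1) "10.60.2.14"   <- a pod IP, routable only inside the cluster
    # 2) "6379"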
F
So, basically, that's me saying: I think we need this; I don't know how. So there's... I think Skarbek has seen it; there's...
F
...quite a long issue; I'll put it in the doc. There's quite a long issue and a PR about some of this stuff on the chart upstream. That's the PR, and it says you can use NodePort to access it externally. I think what that's doing is addressing the second thing I showed with the Redis client: it would give you something that's externally routable when you ask Redis what the Sentinel is. But I don't know how to make this externally reachable.
F
There's also a workaround issue linked, which uses HAProxy. Igor is also suggesting that, instead of a single service with three pods, we just have...
F
...you know, three: we want three Sentinel nodes, three Redis nodes. Instead of a single service with three pods, we have three services, and then we can expose each of those services externally, using load balancers or whatever; that's fine. What else... the chart itself lets you expose the primary and secondary with the service type, but again, that doesn't work, because we need both secondaries, not just one of them. I don't even know.
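A minimal sketch of the three-services idea: one externally reachable Service per StatefulSet pod, selected by the pod-name label that StatefulSets stamp on each pod. The pod names assume the Bitnami chart's naming and are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-node-0-external
    spec:
      type: LoadBalancer
      selector:
        statefulset.kubernetes.io/pod-name: redis-node-0
      ports:
        - name: redis
          port: 6379
        - name: sentinel
          port: 26379
    EOF
    # ...and likewise for redis-node-1 and redis-node-2.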
F
That's a good point, yeah: I haven't been trying these things without Sentinel. Because we want Sentinel, I haven't been trying these without it, so it is possible those don't even make sense with Sentinel, yeah.
C
I think one of the reasons is that we will probably come up with a lot more Redis clusters, and manually maintaining different VM clusters for Redis is a lot of pain and a lot of manual work. If we have a way to automate this, that would be preferable, and Kubernetes is one way to manage workloads and clusters with a common tool set. So I think that was one of the motivations behind it.
A
We want to scale out additional Redis instances. We have six at the moment, and they're all different, I believe. So the sort of thinking is that at least trying to make some of these consistent on Kubernetes will make it easier to operate them, and then the next time we add one, we kind of have a model.
B
Yeah, but the design of Sentinel seems to clash with how Kubernetes behaves, because, basically, it's designed something like this: you connect to a node, and it just gives you the IP addresses of all the other nodes, and you're supposed to have direct connectivity with all of them. And Kubernetes is all about: if you're inside, you can get whatever you want; if you're coming from the outside, this is another LAN, so you have to expose stuff. So we're just making things harder.
B
So my question here is more about: do we really need the orchestration that Kubernetes is giving us here, or would a well-made Terraform module and VMs deployed in the proper VPC...
B
...do what we need? Because, I mean, when you do something like this, even if you think about it from more of a general platform-as-a-service angle, this is the type of service that you buy directly from the platform provider, and then you connect to it. It's not something that you... and you can still force it, but the point is that, I mean, it doesn't...
A
It's definitely a great question. I'm trying to dig out some of the issues, because I know we spent some time looking through the kind of options. The big ones are standardizing, and having the ability to easily spin up new Redis instances.
A
That was, as I understood it, kind of the main benefit.
F
I think another benefit, though I'm not sure it's actually a benefit now that I've looked at the charts again: we were also considering using Redis Cluster in some places, and we figured that if we move Redis into Kubernetes, that's going to be easier to do. Redis Cluster, as opposed to a Kubernetes cluster; overloaded term, yeah. That also needs...
F
...the clients to know the addresses of all the servers. There is a chart for that, but it's not the same chart as Sentinel, so I don't know how, using the Bitnami charts, we would have Sentinel and Cluster at the same time.
F
Sentinel is: you have a single primary, and you can fail over to a secondary and have a different primary. Cluster is sharding, essentially, so you have multiple primaries, and the client says: OK, this key will be on this Redis, that key will be on that Redis.
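That "this key goes to this Redis" routing can be seen directly: CLUSTER KEYSLOT shows which of the 16384 hash slots a key maps to, and each primary owns a slot range (the key name here is arbitrary):

    redis-cli CLUSTER KEYSLOT user:1:name
    # (integer) 4063   <- the primary owning this slot serves the key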
F
Yeah, I don't know which is more boring there, because you can either say that using plain Redis in Kubernetes is boring, because then you use Kubernetes features to automatically figure out... you know, you can only access the primary at any one point, because Kubernetes will just tell you where the primary is, just connect you to the primary.
F
Sentinel is baked into Redis itself, so I'm going to assume that it has maybe some specific things that would not be replicated in Kubernetes, and those cases wouldn't work well. But I...
F
I was trying to run some performance tests; that's what I was trying to do when I started this.
B
What I mean is that, to me, this sounds more like you want to have dedicated clusters that only run your Redis instances, and then you define the expectation from the outside. So you're not really thinking about scaling pods; you're more thinking about: if something happens to my pods, Kubernetes will just recycle them and give me new ones. So let's say you want to have five nodes in terms of Sentinel.
B
Then this means that you define that you want to expose five Redis nodes out of it, so they have routable addresses, and basically you never reach them inside the cluster; you only operate through the service definitions or load balancers. So in the end, you end up with the same result. I'm not sure how much complexity it adds and what you gain from this compared to running straight on VMs. Maybe you standardize on a common tool set, but yeah.
F
Functional Redis... so, for instance, for the cache Redis, at the moment we have three VMs. That would be three services in Kubernetes, not one service with three pods. I think that possibly is the way we'll have to go, which might mean we lose some of the benefits of the chart, which would, you know, handle that for us. But it doesn't matter if the chart handles that for us if we can't actually connect to it from outside the cluster.
F
So yeah, I don't know. I'll speak to Igor again tomorrow and see where we want to take this, because...
F
...I mean, this is the kind of thing... obviously I know how to do stuff with VMs better than I know how to do stuff with Kubernetes anyway, but it definitely feels easier on VMs, because each one is its own thing in the first place, which can be routable or not.
A
What's the best issue? Where are you going to be discussing this, Sean? Because what might be worth us doing: there is this Redis scaling strategy doc, which was the kind of deep dive into how to scale Redis, and the conclusion from that was to go to Kubernetes. So it's probably worth us all having another read through 557 and checking...
A
...that it covers everything, or at least that we bring it into the discussion.
F
There's the higher-level epic, and then this is the one that we're currently working on. So I think it could go in either, to be honest, because...
A
That makes more sense. But I think it's going to be interesting to look at the strategy, the sort of scaling strategy doc, and the other things we considered, and ask, based on what we know now: are any of those looking like stronger options, or are we kind of back to zero?
F
Yeah, I mean, the thing with scaling Redis as well is that we kind of have to go horizontal, because we're already using the most powerful VM node type that we can, right? So Kubernetes...
F
The point of me doing this performance test is just to make sure that we don't suddenly lose, like, 25% off that, which would be bad. But if it's close enough, then that's probably fine. We can't really go up vertically, so we have to slice horizontally, which...
A
And we should definitely expect that there will be more Redises, so whatever we're looking at, it should be easy to add new Redis instances in the future and scale there as well, yeah.
F
I think functional partitioning does have diminishing returns, at which point we would like to go to Redis Cluster as well, because, for the cache say, going Redis Cluster could potentially make a lot of sense. It would basically mean that we never have to think about the cache, never have to split anything out of the cache manually, because that's what Redis does.
F
So anything we have that does multi-key operations whose keys can't hash together, we can't put on Redis Cluster. Realistically, we know for sure we can't do that with Sidekiq, because it does multi-key operations that wouldn't be compatible. We don't know for the others. We've got some code that will fail in Ruby tests if someone tries to add new code that does that, but there's a bunch of existing cases that are already allow-listed.
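The multi-key constraint looks like this in practice (hypothetical keys; -c enables cluster redirections):

    # Keys hashing to different slots cannot share one command on Redis Cluster:
    redis-cli -c MGET user:1:name user:2:name
    # (error) CROSSSLOT Keys in request don't hash to the same slot
    # Hash tags pin keys to the same slot, so multi-key operations still work:
    redis-cli -c MGET {user:1}:name {user:1}:email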
F
I think at the time we did figure out that, if we were going to do Redis Cluster and Kubernetes eventually, it was easier, simpler, to do Kubernetes then Cluster. That might not be the case.
F
Exactly. Setting it up is a breeze; this values.yaml file is, let me see, 37 lines long. That's fine.
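Roughly what a values.yaml of that size can boil down to for the Bitnami redis chart; the exact keys vary by chart version, so treat this as illustrative only:

    cat > values.yaml <<'EOF'
    architecture: replication
    auth:
      enabled: true
      existingSecret: redis-auth   # hypothetical secret name
    replica:
      replicaCount: 3
    sentinel:
      enabled: true
      masterSet: mymaster
      quorum: 2
    EOF
    helm upgrade --install redis bitnami/redis -f values.yaml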
F
So yeah, thanks everyone. It's good to know there wasn't something obvious I was missing, but I'm kind of sad there wasn't something obvious.
A
Yes, so what have we got for next steps and unblocking this? Are there other things we can test out to try and get connectivity, or do we need to discuss more about Kubernetes or this particular Redis instance? What do we want to take as next steps?
F
I'll write up where we are and where we're thinking, because I think we need to figure out at what point we just go to, say, Delivery and say: can you people figure it out, please? But yeah, I think we definitely want to try a service per node, three separate services.
F
And yeah, I'll post it there, and then I'll pop into the Delivery channel as well once we've...
D
If we do a multi-service configuration, I don't know if that induces any additional load on Kubernetes, because it's trying to manage the underlying pods and making sure they're wired together at all times. So if, say, a readiness probe fails when a Redis node fails for whatever reason, there might be some shifting around and stuff like that. And because we're still relying on Sentinel, theoretically the election of a new primary will happen quickly.
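If we want to see how quickly that election actually happens, one hedged way to test it (the Sentinel host placeholder and master set name are assumptions):

    # Force a Sentinel failover, and watch pods churn while a new primary is elected.
    redis-cli -h <sentinel-host> -p 26379 SENTINEL failover mymaster
    kubectl -n redis get pods -w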
B
I would say it also adds an extra layer of complexity, because if you had just VMs, say, then you have this leader election and failure detection, which happens at the Sentinel level. But all of a sudden you're doing this over an IP that is just a service, so there is a Kubernetes layer behind it that may detect a failure and just swap a pod.
F
So yeah, I'll talk to Igor, and we'll see; we'll try to figure out where we are, because I think there are a few considerations here.
A
Awesome, all right, yeah. Thanks for doing that. And I recommend everyone else on this call have a read through the Redis scaling strategy, and then let's join in that conversation on 619 and figure out some next steps.
A
Thanks for coming along; a hot discussion topic there, so it's certainly good to hear these things. Brilliant. Is there anything else anyone wants to question, discuss, or comment on?