From YouTube: Weekly Mac Shared Runners Sync - 20200901
A
So, on my side, yeah, I've just finished the work required to install all the different versions of Xcode.
A
Each of them is, on average, like six gigabytes, so we're looking at like 60 gigabytes just from Xcode, or more, because that's the compressed size. Yeah, I'm currently running the first job that will include all of them, so hopefully that will go through. I'm seeing a lot of timeouts in the, in the MacStadium VPN connection.
A
I don't know if it has anything to do with the upgrade that they did last week. I will be having a meeting with customer service later today.
B
C
Tell people it's how they build on iOS. You've got people doing backwards-compatible builds, you know, so that's, that's why it's a little bit more complex. Okay, there's actually one user who has been commenting on the image issue. I think he's been commenting on the closed beta issue, but he's been giving us a lot of feedback as to some of the reasoning and the rationale. Okay.
B
And does it have to be on the same machine? Can we provide multiple machines with different Xcode versions?
C
A
B
Instead of one runner. But that is something that we can iterate on, and have the autoscaler specify different kinds of images depending on the job execution.
B
B
The matrix builds that we just released could do that as well, so I'm not sure if that would be a better option. And for the closed beta, just say: hey, we support Xcode 11 and 12, just two versions, and that's it, instead of having multiple Xcode versions installed. Because, me as a user, how am I going to specify which version to use, Pedro? Oh.
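For reference, the matrix builds mentioned here are GitLab's `parallel: matrix` keyword. A minimal sketch of how a user might fan one job out across Xcode versions with it; the job name, runner tag, and xcode-select paths are illustrative assumptions, not the actual shared-runner setup:

```yaml
# Hypothetical .gitlab-ci.yml fragment; tag and paths are assumptions.
build-ios:
  tags:
    - shared-macos          # assumed tag for the macOS shared runners
  parallel:
    matrix:
      - XCODE_VERSION: ["11.7", "12.0"]
  script:
    # select the requested Xcode before building
    - sudo xcode-select --switch "/Applications/Xcode_${XCODE_VERSION}.app"
    - xcodebuild -project MyApp.xcodeproj -scheme MyApp build
```

This spawns one job per listed version, which is the alternative being weighed against installing many Xcode versions in a single image.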
A
B
Okay, and then, and you might know this: how do other competitors allow you to specify the Xcode version? Is it a matter of choosing which... how, how do you choose the Xcode...
C
Version? Yeah, that's a good question. Let me look into that and add it to the agenda afterwards. I haven't looked at that specific question, I don't know the answer offhand, so I'll look into that question and also double-check the other question, which is: I believe, for example, that in the Microsoft image they have all the Xcode versions. I'll double-check whether they have all of them, or if... I think that's all in one image, compared to others that may be doing what you're recommending, which is offering, like, different things.
B
If Pedro can leave a comment about this in the issue and specify that: one, we need to look at the possibility of having multiple image versions, and two, look into whether we can have the autoscaler support different kinds of images depending on the tags that we specify.
B
Because, at the moment, the autoscaler does not support us the way we're doing it for Windows. For example, we are using the autoscaler for Windows 1809, Windows 1903 and Windows 1909: it's literally just three different runners registered using the same autoscaler binary and different autoscaler configurations.
B
But ideally we should try and support multiple, multiple images out of the box; that's not something that you configure. For example, ideally we should try and use the image keyword in the .gitlab-ci.yml file, where the user just specifies which image to use, and then we can just use that image. So it doesn't even have to use tags, right? They just specify: hey, I want the image for Xcode 13, and we'll, we'll just load that image from our disk, basically.
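From the user's side, the image-keyword approach being proposed might look something like the sketch below; the image name is made up for illustration, and no such identifier is confirmed anywhere in this discussion:

```yaml
# Hypothetical sketch: the runner would map this image name to a
# macOS VM image stored on its own disk, instead of matching on tags.
test:
  image: macos-10.15-xcode-12   # assumed image naming scheme
  script:
    - xcodebuild -scheme MyApp test
```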
A
B
B
B
Yeah, it does not affect the execution itself, but it's going to affect the disk size, right? So imagine we have a 70-gigabyte image, right, per job. So that means we have 70 gigabytes per job, and then some gigabytes for the user, so imagine 30 gigabytes: so each job costs us 100 gigabytes, right. That is really, really expensive. So if we have a disk size limit, which, with MacStadium, we do (I don't know that limit), then we're going to be limited by the disk size.
B
But if we just have one Xcode version installed, that image is going to shrink down to 30 gigabytes, right? So that is around 60, 70 per job, so that is saving us 40 gigabytes per job. So we can run more jobs concurrently and not be limited by this. Does that, does that make sense? Yeah.
C
Yeah, and I'll check into both of those. And I added a comment to the doc that links to, to Rene's comment, but he's actually commenting on the closed beta. So we could take a look at that, and we could probably also be getting some feedback from him as we iterate on these ideas; he seems to have a good handle on some of this stuff. Okay.
A
B
Yeah, so if Pedro can write up everything we just discussed in the issue, so we keep track of this, I think it would be beneficial.
A
Okay, the next point is xcbuild, which I saw included in the, in the issue. I don't see it anywhere in a package manager, so it seems we'll have to build this with CMake and Ninja. So it's probably good... I'll, I'll make a separate issue for that.
A
A
B
But this is for incremental builds? Well, yeah, I guess for the...
A
B
B
C
Okay, and in which issue do you want to discuss that one: in the, in the image issue, or a different issue?
A
So this is already covered. I found out another issue with the image names: it seems that since last week, with their latest upgrades, they started limiting the size, you know, the length, of the image name.
B
A
It shouldn't be, because I was able to generate... you know, there was a green build last week with exactly the same name, and I reran it this week and it started failing with this issue. So maybe they didn't document this, that's for sure. The...
A
A
I wasn't sure if it was, like, the VM name, or if it was already the image names. Okay, I think so, yeah, but your limit was higher; I think it was something around 38 characters.
A
Yeah, I'll discuss this later with, with them. Yeah, then there are the disconnects, okay. So that's it, and over to you now.
C
Before we go into my things, Pedro: after you have your meetings with the MacStadium folks, if you aren't getting, like, progress on it during this week, just ping me on Slack, because I have my follow-up meeting with their company every Friday at 10, and every so often I can also pull John Cabanas in on that as well. So if we need, you know, to get some traction on getting it solved, let me know if it's not happening by Friday.
C
Yeah, so, cool, all right. So on the closed beta process, I don't want to, kind of... I was just, this is kind of like my brain dump, getting stuff out of my head when I'm just running back and forth everywhere. I've got to make sure that it makes sense and kind of sync up on it. So in terms of the closed beta process, here's what I have so far, right.
C
C
So the next step in the process, as I've written it up, is that Darren, the PDM, will review the request in the issues, update our capacity tracking worksheet (I've created a worksheet for that; it needs a little bit of work, right) and then sort of approve it. The next step is provisioning the VM. So that's... my first question is: okay, who is going to actually provision the VM for the customer?
B
That, that is a rabbit hole I went down last week as well. There are a few things that we need to take into consideration. So the first thing: it's probably going to be me or Pedro provisioning the VM. So, first of all, it's probably me or Pedro that's kind of packaging the VMs. That is something manual that we have to do, and something that is part of the closed beta. That's why we have a closed beta, right. Two:
B
We want to limit this VM, in the sense that we might have to create an Orka username and password for that specific customer, just so everything is isolated, right. That means they can't see the VMs of other people and things like that. In any case, the only way they can see other people's VMs is through the Orka API, but it's better to just limit them, right. And then we would need to know the project that they need the runner attached to.
C
B
C
C
B
C
Right, so, listing those three steps: my other kind of question was the VM specs for the closed beta. So I was thinking we offer only one VM type. GitHub Actions is 2 vCPUs, 7 gigs of RAM, around 14 gigs of SSD. Do you guys have any other thoughts about the size at the moment? And I apologize, I haven't tested the performance in the last week; I'm definitely going to log in today.
B
I think it has to be eight gigabytes for us, not seven. I'm not sure if we can control the amount of memory on MacStadium; we can control the amount of CPUs, but I'm not sure about the memory, as far as I can remember. But with 14, 14 gigabytes of SSD...
B
That is a bit too low for us, but we can make it in the sense that they have 14 gigabytes available to them, just because our image itself is 32 gigabytes, right.
C
A
So far, what I've seen is that it's proportional to the amount of CPUs that you, you add. Yeah, so, yeah, yeah.
B
Yeah, because I'm not sure; we'll, we'll have to investigate that, that's a good question. But I think the only option for the Orka CLI is specifying the amount of CPUs, and the memory then scales with that: like, for example, eight gigabytes of RAM, 16 gigabytes, and so on and so forth.
C
C
All right, my other question on the provisioning was the runner config for the closed beta. So, in our image... and by that I mean, because you guys gave me a really good re-education yesterday in terms of the runner config and concurrency: are we just going to be doing one worker config by default, you know, one config.toml for this? What are our thoughts?
B
Yeah, ideally, like, I would say for the closed beta... we can improve it later on, right, but let's start with the most simple thing, right: like, one worker installed, and set the concurrency to 10, so it's going to run 10 jobs at the same time, but we have one worker. And then, if we see that it's not working well, or the concurrency is too high, and so on and so forth, we can...
B
C
Cool. So then number four is defining the user; we kind of talked about that, we'll clean it up a little bit. My last question is: when do we think we really want to start onboarding these, the week of September 7th, or September 14th? What does everybody feel right now? The 14th.
B
A
A
You know, it's not as comfortable... it could be done, but it, it would change priorities. I think we don't have to shift priorities to get this. Let's not do that.
C
That's what I'm suggesting, in the end: let's start the week of September 14th, when maybe we can even do a slower rollout. So, I know that I hadn't followed up, but there was that one internal GitLab team... maybe onboard them first, you know, like, the first two days, and then get the GDK team, right? Yeah, right. Whoever that was... you?
B
C
Me. So, even though we start the week of the fourteenth, we can, like, go slow: like, you know, the first user day one, and the next, you know. So that would be how we, you know, just turn everything on gradually. Okay, cool. My last question... I hadn't really given this... I don't know if you guys have given thought to this...
C
B
Yeah, we have zero, zero say from a technical perspective, because it's a matter of creating a VM. From a product perspective, like, my opinion would be: we leave it there until we create the open beta product, right. I think that's having some kind of autoscaling.
B
C
B
Yeah, and then, and then we can say: hey, if there are breaking changes, which, 99%, there are going to be breaking changes from closed beta to open beta... if there are breaking changes: hey, you have two weeks to migrate your jobs to the new platform, and then we will just delete the VM.
C
C
I agree, that makes sense, okay. And, and some of that was in my head, but I just wanted to get some other feedback. You know, when you're just doing those mental exercises by yourself, you're doing it in a vacuum. And my last section was usage guidance for the closed beta. So this is kind of what I was... this is just a brain dump.
B
We only set up a VM per job for security reasons, not performance reasons, just because we can't have two users running untrusted code on the same VM. That is the only reason we use one VM per job and then delete it: just for security reasons, not performance or anything like that. Yeah, that's fine!
C
A
C
Right, and that job could be... or rather, each pipeline gets a new VM, yeah. What I'm suggesting is: if the premise is we spin up a VM, but it's meant to execute... it's like a sticky runner, it's meant to execute all of the jobs in that pipeline, and then only then, after all the jobs in that pipeline are done, do we tear down that VM. Flipping the premise, in other words.
B
So that is something that sticky runners will allow us to do: having, like, a sticky... one VM per pipeline, and then tearing down that VM after that pipeline. But yeah, that is something that both the autoscaler, the Docker Machine executor, and things like that will have to support when the sticky runners come into play, right. So...
C
That's, that's what I was saying. So, stepping back, kind of, for the closed beta: do we just let people know from the get-go that, even though GitLab has all this great out-of-the-box parallel-jobs capability, what we recommend for the closed beta is... what you're getting for the closed beta, just to be, right... all right, so you're only getting a VM, one VM. But maybe, do we want to go... basically, I guess what I'm saying is:
C
We want to say: look, let's be more prescriptive. Let's, kind of, only for the macOS runners, only spin up a VM per pipeline. Even though I know we're still doing the runners work, do we want to be proactive and do that up front? But I know, I don't have the data for anything. I just...
B
Yeah, yeah, that is, that is something that we cannot do, for a technical reason: just because GitLab, the Rails application, will not always assign the job to that runner, right. When we go with the autoscaler right now, like, for the production setup, we'll have multiple GitLab Runner managers for macOS, and all of them can pick up jobs individually, right. At the moment, there is no way to say: hey, for this pipeline, always pick this runner, and then the runner will say okay.
C
B
B
C
B
That is something that the user has to do manually. That is something that the user sets in the runner configuration, as we discussed yesterday, using the concurrency limits, because, at the end of the day, it's the user who knows if they want concurrent builds on their machine or not. So we should not limit it: if it's a sticky runner, per job, they can run multiple jobs if they want, right. So I...
C
Guess it's kind of like a different way. What I'm looking for from a product perspective is this... so, ignoring the user's config: I am looking for a simple way to make sure that we can efficiently scale those VMs and, of course, effectively scale those VMs. So in my simplistic view, if I'm only, like, provisioning a VM for a, quote-unquote, pipeline (and maybe it's an hour-long pipeline, two hours, 30 minutes), but at least it's... that might be simpler and cheaper scaling than saying...
B
Very fast. Is that something... I'm not, I'm not sure that is something that we can hide from the user, but, but it's something that the user has to enable, like...
C
Right, the user has to enable it. But I guess... and then that's fine, but I'm just kind of leaning towards saying we rely on the user to enable it, but our point of view is: hey, that's how you have to use it. Now, the user could kind of break it.
C
B
Best practice is for you to use workspaces, so then you can use a sticky runner, and then you can even use incremental builds; that might be useful for that case, right. You have one, one job of your pipeline building a specific app, and, like, one job of your pipeline building a specific component, and another, like a related change, building another specific component, where they share components, right. So that is where incremental builds come into play, and maybe that is why they need sticky runners.
B
They have incremental builds, so their whole pipeline uses the same build cache overall. And that is why Android development and macOS development is so hard: just because we don't have the artifacts from one stage to another. And that is why, most likely, most people will use sticky runners in this case, and that's why, in our example, we should tell them: hey, it's best practice to use sticky runners.
B
C
B
You know, I would say no, but when they are using the CI job, they have root access, so it's like their session with the machine. But we will not provide credentials, just because, for us to do that, we would have to give them the Orka VPN, we would have to give them Orka logins; that is, that is not ideal. So we just give them the VM and they just use it. They're using the shell executor, so technically they're SSH'd inside of the VM through the CI job, and that's it, and that's...
C
B
I, I mean, that is up to you. Pedro might have a better answer, but I'm not sure if we can resize VMs on the fly; we would have to create a new VM for the user. So I don't think we can, like, change the specs without re-registering a new runner. Okay.
C
B
C
A quick, kind of last high-level question, and we're not going to get all the details in a day, but in terms of the week of the 14th, when we get customers on there: where do we want them to put bugs or feedback? I created that macOS beta...
A
C
C
B
And it's feedback... it's, it's feedback, not support, yeah, which is right, right. Okay, cool. One question I have is regarding support: who is going to support the customers, in the sense that, if there are problems, who is going to work on them?
C
B
C
B
A
No? Okay, so, so you want me to take this, creating the base image? Yeah, sure, makes sense.
B
B
The admin password is already in the 1Password vault, but yeah, you can either generate a new password or do whatever.
A
A
Yeah, sounds good. So I see the, the, the biggest part here is documenting the, the process, and okay, it's really documenting the process; the rest is already done.
A
Yeah, that's what I was going to ask: what's the best location for this documentation? Is it this issue, or...
B
I would say in the README file of the Orka project, since it's the base image that the Orka project is going to use, right. The reason why I want this, why I think we need this, sorry, is: one, so the admin password is no longer the default password, and two, our CI jobs will be a little bit quicker, because we don't have to do all the updates, right.
B
So right now... I forget, I think, inside of our .gitlab-ci.yml file, we, we specify 90GBCatalina.img, right, to use that image as a base image. Basically, what we want to do is specify runner-ci/base.img, so we use that instead of the 90GB Catalina image, and that is just to save a bit more execution time and not have the default password.
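The swap being described here, building on a pre-baked base image instead of the stock Catalina image, might look roughly like this in the image-build pipeline's .gitlab-ci.yml; the variable name and both image paths are guesses at what the transcript describes, not the real configuration:

```yaml
# Hypothetical fragment; names are illustrative.
variables:
  # Before: build on top of the stock Catalina image each time.
  # BASE_IMAGE: "90GBCatalina.img"
  # After: build on a pre-updated image with a non-default admin password,
  # skipping the repeated OS updates and speeding up CI.
  BASE_IMAGE: "runner-ci/base.img"
```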
B
B
Yeah, it can be part of the macOS setup role, right. So the macOS setup role is specifying the image, updating, rebooting the image, and setting some other permissions and the time zone, right.
B
A
A
C
Okay, check this out... I haven't... well, let me find it really fast and just dump it in. Don't...
C
...of RAM, right. We did, but... let me also give you guys the, the link to the spreadsheet, so you can play around with it as well. I'll find the link to the spreadsheet and just dump it in here, I think, and you can kind of, like, take that spreadsheet and play with it. Let me... I lost track of it, but I'll find it in this. Oh.
C
Yeah, here we go, Pedro.
C
Yeah, and you wanted to... there. So on the first tab of that spreadsheet there is an initial... anything in that spreadsheet you can, you can, like, hack up, duplicate, modify, create your own version if it doesn't work. But this is my first pass at kind of, like, figuring out, like, VM sizing and costing, for that first tab. And again, if there's a different way to look at it, or a different, like, algorithm, create your own spreadsheet tab and hack it up.
A
And last question: do you know anything more about the GDK team, if they want to, you know, be one of the first customers for this?
B
Executor... you can try and install Parallels inside of the macOS VM and run those tests inside of Parallels, because at the moment, VirtualBox and Parallels are not something that we run inside of our CI, right. So those tests are kind of written, but they're not actually run. So we can try and use the macOS shared runners to run the Parallels executor tests, if we want to.
B
A
That certainly is an interesting scenario. I was just thinking of, you know, having a real-world project that leverages, like, Ruby and this and that, you know, different frameworks, to validate that the setup we have, in terms of changing versions, everything, everything is working as expected.
A
In that sense, the GDK is already well documented, you know, in terms of what the setup is and what's needed to get it building. So it could be a useful exercise to try to run that project, run that job, in, in our shared runners, to validate that we have it.
C
B
Yeah, if you can reach out about, like, how they would use it. Like, would they just... I know some of their pipelines actually set up the GDK inside of the VM, so maybe they can do the same for macOS, but I think we need to talk to the GDK team to see, like, how.
C
C
C
Guys, I'm just going to add one more thing: when you guys look at the spreadsheet, there's the VM sizing and cost calculator tab, which is the one I created, and then the Q planning tab, which came from Android Nougat; that's what they use for dot-com. So it's kind of like two different ways to think about how we plan for scaling. All right. All right, cheers.