From YouTube: Kubernetes SIG Testing - 2020-03-10
A
Hi everybody, my name is Aaron Crickenberger. Today is Tuesday, March 10th, and you are at the Kubernetes SIG Testing bi-weekly meeting. We adhere to the Kubernetes code of conduct, which basically boils down to: don't be a jerk. This meeting is a public meeting; it's being recorded and will be posted to YouTube shortly. On to today's agenda.
A
So the first thing I wanted to call out was the fact that Jetstack has a number of prow jobs that they want to show up on TestGrid, and they're being good open-source citizens, so they're posting test results for things related to cert-manager, which is a really useful thing for managing your certs for Kubernetes via Let's Encrypt, to the testgrid.k8s.io instance. But the way this is being done now is by using a script called transfigure, which we've had around for a little while.
A
It allows people who are running their own instances of prow to generate a TestGrid configuration file for use with our instance. If you want to see how that was done, this PR is a great example of how to do so, and this README will tell you more about how to use transfigure, including giving you an example of how to replicate the prow job that they're using.
B
If that's what you want to do exclusively, sure; but also, theoretically, if some of your stuff is in GCS and some of your stuff is in S3, you'd be able to have some jobs write to S3 and other jobs write to GCS. Yes, that's interesting. And the interface thus far is pretty clean: the people who are using this interface don't have to know where the writing is actually happening; it's all encapsulated in the package.
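(For context, a minimal sketch of what that kind of provider-agnostic writing can look like, assuming the gocloud.dev blob package; the bucket names and keys below are made up, and this is not necessarily the exact code in the PR.)

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // registers the gs:// scheme
	_ "gocloud.dev/blob/s3blob"  // registers the s3:// scheme
)

// writeResult uploads job-result content to whichever provider the bucket
// URL points at; callers never see GCS- or S3-specific code.
func writeResult(ctx context.Context, bucketURL, key string, data []byte) error {
	bucket, err := blob.OpenBucket(ctx, bucketURL)
	if err != nil {
		return err
	}
	defer bucket.Close()

	w, err := bucket.NewWriter(ctx, key, nil)
	if err != nil {
		return err
	}
	if _, err := w.Write(data); err != nil {
		w.Close()
		return err
	}
	return w.Close()
}

func main() {
	ctx := context.Background()
	// "gs://example-bucket" is a placeholder; an "s3://..." URL would go
	// through the exact same call.
	err := writeResult(ctx, "gs://example-bucket", "logs/123/finished.json", []byte(`{"result":"SUCCESS"}`))
	if err != nil {
		log.Fatal(err)
	}
}
```

The point of the abstraction is that only the URL scheme selects the provider, so the calling code never branches on GCS versus S3.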
B
So the three of us on this PR were iterating on this and did a bunch of reviews, and I'm pretty happy. This is a cool contribution from somebody outside of the core set of people, so maybe this is a good example of someone contributing new and exciting functionality who isn't one of the standard people working on prow. So that's really cool.
B
With
the
sort
of
initial
components
like
tide
and
then
extend
that
to
the
rest
of
the
system
like
pod
utilities,
etc,
so
that,
basically
anybody
who's
using
trout,
can
you
know
use
s3
or
Google
Cloud
storage
and
the
s3
support
is
the
way
it
was
done.
It
should
be
easy
to
add
other
providers
in
the
future
as
well,
so
yeah
so
I
think
that's
very
excited.
B
So I think he actually has a fork, or his own version of it, that has the rest of the system using S3 as well. But rather than replacing everything at once, it seemed safer to start small and make sure it works with Tide, and then extend it to the rest of the components, instead of having the way we do all of our writing for the pod utils and everything else switch over all at once in a single release.
A
The next thing I wanted to call out was something from Travis Clark. GitHub added a few new things to its branch protection API, including the ability to require a linear history, which means no merge commits being pushed to a branch, and more granular control over how people are allowed to manipulate branches. So that's something that our branch protector prow component now supports.
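(As a rough illustration of the underlying GitHub API surface, here is what an update with the new knobs might look like; the org, repo, branch, and token handling are placeholders, and this is the raw REST call rather than branchprotector's own code.)

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Branch protection settings, including the newer fields discussed here.
	body, _ := json.Marshal(map[string]interface{}{
		"required_status_checks":        nil,
		"enforce_admins":                true,
		"required_pull_request_reviews": nil,
		"restrictions":                  nil,
		"required_linear_history":       true,  // forbid merge commits on the branch
		"allow_force_pushes":            false, // more granular control over branch manipulation
		"allow_deletions":               false,
	})

	req, _ := http.NewRequestWithContext(context.Background(), http.MethodPut,
		"https://api.github.com/repos/example-org/example-repo/branches/main/protection",
		bytes.NewReader(body))
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
	req.Header.Set("Accept", "application/vnd.github+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```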
A
So here's the default decoration config. This is something that prow uses to decorate jobs; I think it's mostly used by plank, if I remember correctly. It has information like what images we are going to use to implement the pod utilities, but also some good defaults that can be overridden on a per-job level if we want, like how long the timeout for the job is.
A
What's
the
grace
period
before
we
consider
the
job
scheduled
if
I
remember
correctly,
things
of
that
nature,
and
so
there's
PR
and
you'll
notice
here
this
is
all
under
a
star.
So
this
is
the
default
decoration
config
for
all
repos,
all
orgs.
We
have
the
ability
to
specify
this
on
a
per
org
or
repo
basis.
These
days
and
right
hand
is
now
added
the
ability
to
have
default
resource
requests
and
limits.
A
So if you want to just say that, by default, any prow job spawned by prow will end up with a pod requesting such-and-such CPU and such-and-such memory, that is something we can support now. Again, because it uses the decoration config format, this could be done on a per-repo or per-org basis.
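(A small sketch of what such defaults amount to; the actual field names in Prow's decoration config may differ, and the quantities here are made up. It just shows the Kubernetes resource-requirements shape being defaulted and then partially overridden per job.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultResources is what every decorated job pod would request unless the
// job overrides it; the specific quantities are illustrative only.
var defaultResources = corev1.ResourceRequirements{
	Requests: corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("2"),
		corev1.ResourceMemory: resource.MustParse("4Gi"),
	},
	Limits: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("6Gi"),
	},
}

// merge applies the defaults only where the job did not specify its own values.
func merge(job, def corev1.ResourceRequirements) corev1.ResourceRequirements {
	out := job
	if out.Requests == nil {
		out.Requests = def.Requests
	}
	if out.Limits == nil {
		out.Limits = def.Limits
	}
	return out
}

func main() {
	// A job that only sets its own CPU request still picks up the default limits.
	perJob := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("8")},
	}
	fmt.Println(merge(perJob, defaultResources))
}
```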
D
So the way Boskos used to work, up until this pull request, was that it had a global lock that was acquired for each and every request ever made to it, which basically means that all requests got serialized, and that in turn meant that the whole thing was pretty slow. What was done in this pull request, the last bit of it, is that it changed Boskos, which is internally based on custom resources that get updated.
D
It now relies on those custom resources for concurrency control, which means that if a resource has changed, because for example a client tries to acquire it and another routine changed it in the meantime (so basically after the request first came in, but before the handler updated the resource), then the Kubernetes API server rejects the request because of what happened in the meantime. This allows us to basically just remove the locking and rely on the API server.
D
I
saw
rejecting
requests
for
things
that
that
could
appear
in
the
meantime
and
to
make
sure
that
client
don't
see
a
lot
of
this,
because,
but
this
basically
means
that
we
expect
sort
of
regularly
update
because
to
fail
is
yeah.
This
is
made
possible
by
just
be
trying
on
the
specific
era
yeah.
So
this
all
in
oil
required
some
more
refactoring
within
Bosco's,
but
makes
a
huge
difference
for
its
latency.
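(A minimal sketch of that pattern, optimistic updates with a retry on the API server's conflict error, assuming the dynamic client and a made-up Boskos-style group/version/resource; the real schema and field names may differ.)

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// Assumed GVR for Boskos resource objects; the real group/version may differ.
var resourceGVR = schema.GroupVersionResource{Group: "boskos.k8s.io", Version: "v1", Resource: "resources"}

// acquire marks a resource busy without any global lock. Every attempt
// re-reads the object so it carries the latest resourceVersion; if another
// writer changed it in the meantime, the API server answers with a conflict
// (HTTP 409), and RetryOnConflict simply tries again.
func acquire(ctx context.Context, c dynamic.Interface, ns, name, owner string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		obj, err := c.Resource(resourceGVR).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if err := unstructured.SetNestedField(obj.Object, "busy", "status", "state"); err != nil {
			return err
		}
		if err := unstructured.SetNestedField(obj.Object, owner, "status", "owner"); err != nil {
			return err
		}
		_, err = c.Resource(resourceGVR).Namespace(ns).Update(ctx, obj, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder namespace and resource name.
	if err := acquire(context.Background(), client, "boskos", "gce-project-01", "my-job"); err != nil {
		log.Fatal(err)
	}
}
```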
B
There have been more improvements to Boskos lately, which I think is a nice change, so thanks for doing that. Let's see, next up is workload identity. I put some links on the agenda, and I guess, Aaron, would you be able to pull up any of those links, or do you want me to do that? How would you prefer we do this?
B
Yes, I don't know, I guess maybe show the first one, the general one; there's a link on the test-infra side about workload identity. Workload identity is a feature that is part of GKE that allows you to bind, or associate, a Kubernetes service account with a GCP service account. Historically, the way that we have been authenticating with GCP is that you have a secret JSON file.
B
Then there are some fancy shenanigans where you pass that secret to Google, and they give you back a short-lived token, and then you can send your token to various things like Cloud Storage to authenticate as whatever user you are. This is a way to make that happen differently, by saying that this Kubernetes service account should always authenticate automatically as this GCP service account. You can see there's documentation on that page that talks about how to declare that.
B
One of the things I'm excited about is that it's declarative, and so it allows more self-service-type stuff. Right now, if there's a secret that someone wants to use in one of their jobs, there's not a great self-service way of doing that: they have to hand the on-call team that secret and hope that we upload it correctly. This is a way where you can just declare it in a file with a PR.
B
You say: here is the service account I want to use on my pod, and here is the GCP account I want it to authenticate as. In the middle of the screen there, you can see that serviceAccountName is set to something; that's essentially what you do in your pod spec, you say here is the Kubernetes service account I want it to act as. And then, when you declare that service account, you say the GCP binding that you want it to have.
B
So
those
are
kind
of
the
two
things
that
you
need
to
do
you
need
to
declare
a
service
account
and
annotate
it
with
the
DCP
service
account
and
there's
a
command.
You
need
to
run
on
T
cloud
to
make
sure
that
the
service
account
is
actually
authorized
to
act
as
that
TCP
user,
and
then
you
need
to
make
sure
that
the
pod
is
actually
using
the
kubernetes
service
account
that
is
able
to
act
as
the
TCP
service
account
and
so
yeah
the
past.
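(Roughly, those two steps look like the following; the namespace, service account names, and project are placeholders, and the gcloud binding is shown only as a comment since the exact invocation depends on your setup.)

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Step 1: declare the Kubernetes service account and annotate it with the
	// GCP service account it should act as (names here are hypothetical).
	sa := &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "prow-build",
			Namespace: "test-pods",
			Annotations: map[string]string{
				"iam.gke.io/gcp-service-account": "prow-build@example-project.iam.gserviceaccount.com",
			},
		},
	}
	if _, err := client.CoreV1().ServiceAccounts("test-pods").Create(context.Background(), sa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Step 2 (outside Kubernetes): authorize the binding on the GCP side,
	// with something like:
	//   gcloud iam service-accounts add-iam-policy-binding \
	//     prow-build@example-project.iam.gserviceaccount.com \
	//     --role roles/iam.workloadIdentityUser \
	//     --member "serviceAccount:example-project.svc.id.goog[test-pods/prow-build]"
	// Then any pod that sets serviceAccountName: prow-build authenticates as
	// the GCP service account.
}
```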
B
Over the past few months, I guess since the beginning of this year, maybe a little bit before that, I have been converting our prow deployments to use workload identity, with varying amounts of success, and that is mostly finished. For the most part, it was pretty simple.
B
We haven't really migrated the bulk of the jobs; most of our end-to-end jobs are not using it yet. But by turning on workload identity, you wind up using a GKE-based metadata server, and that turns off the GCE metadata server. An assumption I had was that we weren't using that anywhere, and that has proved false in a couple of cases; this is one example of an outage from turning it on.
B
You
know
on
the
cluster,
I
turned
on
the
workload
identity
and
that
wound
up
breaking
this
code
path,
which
I'm
not
exactly
sure
what
it
I
mean.
It
was
like
trying
to
find
I
think
we
know
who's
doing
something,
but
it
was
explicitly
talking
to
the
metadata
server
and
if
you
do
not
specify
a
certain
ad
service
account
for
your
job
and
you
turn
on
workload,
identity,
then
metadata
server
won't
give
you
access
tokens
anymore,
like
before.
B
Instead of curling the metadata server, there's a gcloud command to print the access token, on those green lines, and that will know whether or not to use the secret file or the metadata server based on what's configured. And since we are passing the secret file into this job, I was a little bit surprised that we were not using it to get the credentials, because that probably also meant it was authenticating as the wrong identity or whatever. So it was potentially good to discover that, but ideally we would have discovered it ahead of time.
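(That behavior of knowing which credential source to use is Application Default Credentials; a small sketch of it, with an illustrative scope. The same code picks up the secret JSON file when GOOGLE_APPLICATION_CREDENTIALS points at one, and otherwise falls back to the metadata server that workload identity provides.)

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()
	// FindDefaultCredentials checks GOOGLE_APPLICATION_CREDENTIALS (the secret
	// JSON file) first, then falls back to the GCE/GKE metadata server.
	creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatalf("no credentials found: %v", err)
	}
	tok, err := creds.TokenSource.Token()
	if err != nil {
		log.Fatalf("could not mint token: %v", err)
	}
	fmt.Println("project:", creds.ProjectID, "token expires:", tok.Expiry)
}
```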
B
You
know
so
here's
may
be
more
complicated
change,
so
Internet
like
inside
of
Google,
we
have
some
private
results
and
so
spyglass
has
an
ability
as
a
way
to
ship.
It
can
show
you
the
logs
for
private
results,
and
that
involves
giving
you
a
sign
URL,
where
it
uses
the
secret
JSON
file
to
sign
the
URL
and
obviously,
if
you
do
not
have
a
secret
JSON
file
to
sign
the
URL
right
now,
there
are
ways
to
use
the
metadata
server
to
sign
the
URL.
B
But the storage client libraries don't support that yet, and that's something we'd have to deal with; essentially that didn't work, so for right now there are some scenarios where we are still using the secret file. And this is an example of me migrating it: Cloud Storage can also authenticate with a cookie, like if you're logged into Google.
B
It
can
just
use
your
authentication
to
download
the
file
and
that's
obviously
going
to
work
inside
of
Google,
since
we
are
all
logged
into
Google,
but
that
may
or
may
not
be
the
best,
a
great
scenario
for
yeah
for
other
people.
So
this
is
an
OP
that
you
can
now
turn
on
and
then,
hopefully,
eventually
we
will
update
the
client
libraries
will
get
updated
so
that
they
can
sign
the
URL
without
eating
the
secret
file.
B
So
you
know
I
think
the
next
steps
you
know
so
one
I
think
important
thing
to
learn
is
that
you
know
my
assumption
is
that,
since
all
our
jobs
are
using
the
secret
file
that
this
wasn't
caused
any
outages
and
so
didn't
need
a
heads
up
was
wrong
and
you
know
so.
Maybe
we
should
be
a
little
bit
more
careful
about
yeah
those
sort
of
list
of
various
things
that
have
been
happening
to
make
this
happen
and
yeah.
B
Need
to
continue
to
be
careful
about,
you
know
things
that
we
think
should
not
cause
outages,
obviously
can
and
then
the
next
step
will
be
to
start
migrating,
our
jobs
and
yeah.
So
there's
a
bunch
of
you
know,
there's
the
readme
to
follow
and
then
also
the
you
know,
there's
a
bunch
of
TRS
there
of
some
various
other
things
that
have
been
migrated.
So
at
some
point
we'll
need
to
you
know,
create
a
campaign
to
help
people
convert
those
over.
B
Yeah, I mean, I think you can solve that, but I don't think that would necessarily work; we'd need another solution for that. I think the main benefit of workload identity is that, since there isn't a secret file that you're injecting into the container, it is impossible for a rogue job to steal that secret and then go move it into its botnet or whatever and start authenticating as somebody.
E
Cool, okay. So basically, there is a substantial amount of interest in the community in removing Bazel from kubernetes/kubernetes, not test-infra, but kubernetes/kubernetes, as it causes a whole bunch of friction, which is not necessarily required when updating things, and also when people are just going about their day and suddenly, thanks GitHub...
E
Anyway, people want to get rid of Bazel in kubernetes/kubernetes, which sounds nice potentially, but there are a whole bunch of reasons Bazel is useful to us, and we would have to deal with some of those. One of the ones I am most familiar with is the fact that we use it extensively in our CI infrastructure in order to speed things up, and it is quite fast. If we look here, this is our pull-kubernetes-bazel-test job; it runs on every kubernetes PR.
E
It takes ten to twenty minutes most of the time, which is a reasonable amount of time, and compared to some of our CI jobs it is much faster. But if you run make test on its own, you lose a bunch of the caching that we get out of Bazel combined with Greenhouse or RBE (remote Bazel execution), and you end up with really slow test times, which can be north of an hour depending on the node that you get on the build cluster. We can mitigate that, though.
E
Since we started off on all these Bazel adventures some years ago, Go has grown its own caching mechanisms, which we can use in place of Bazel's if we can persist the cache in between CI jobs. So as a proof of concept I basically built a very simple thing that will generate a make test cache, taking the full time it takes to build and run the tests. Here it's taking 30 minutes to an hour; sometimes it's slower, an hour and 17 minutes there.
E
It's also always failing, but that's just because there happens to be a bad unit test in kubernetes/kubernetes that always fails, not because these jobs are broken; I don't know how to fix that one yet. So this is just generating the test cache, and then later, if the Zoom UI stops hiding my tabs from me...
E
So then we can have jobs that use these pre-generated build caches, and they run much faster. They are comparable with, or even faster than, the Bazel tests, often running in less than 10 minutes. If there is a substantial delta where a bunch of tests have been invalidated since the cache was made, they will be slower. I don't see one right away... oh, here's one.
E
This one took 26 minutes, but in the general case, for most PRs, it will be at least as fast as it was before, which is really the important part. And you can see that, with the exception of this one bad test, it's actually fairly stable. We have one flake here, but otherwise we're pretty good, and we're going to move or remove this bad test, which assumes it's running in a particular container environment.
E
What was this next one? I don't know why I had this tab open; you can see that it worked. The actual implementation here is super trivial. Basically, we have this job that generates the cache: it literally just runs make test, then zips up the cache and sends it off to GCS in some predetermined location. Then, when we want to run the job, we pull the cache back out of GCS, unzip it in place, and run make test, and it's much faster now because we already have the cache.
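(A hedged sketch of that round trip; the bucket path, the env-var switch, and relying on tar and gsutil being on PATH are all assumptions, but it shows the seed-then-reuse shape described here.)

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// run executes a command, streaming its output, and returns its error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Where go test keeps its build and test cache.
	out, err := exec.Command("go", "env", "GOCACHE").Output()
	if err != nil {
		log.Fatal(err)
	}
	cacheDir := strings.TrimSpace(string(out))
	const remote = "gs://example-bucket/go-test-cache.tar.gz" // assumed location

	if os.Getenv("SEED_CACHE") == "true" {
		// Periodic seeding job: run the tests cold, then snapshot the cache.
		_ = run("make", "test") // a known-bad unit test may fail; the cache is still useful
		if err := run("tar", "-czf", "/tmp/cache.tar.gz", "-C", cacheDir, "."); err != nil {
			log.Fatal(err)
		}
		if err := run("gsutil", "cp", "/tmp/cache.tar.gz", remote); err != nil {
			log.Fatal(err)
		}
		return
	}

	// Presubmit job: restore the most recent cache (best effort), then test warm.
	if err := run("gsutil", "cp", remote, "/tmp/cache.tar.gz"); err == nil {
		_ = os.MkdirAll(cacheDir, 0o755)
		_ = run("tar", "-xzf", "/tmp/cache.tar.gz", "-C", cacheDir)
	}
	if err := run("make", "test"); err != nil {
		os.Exit(1)
	}
}
```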
E
So this is built and will be relatively easy to apply to our actual PRs. The other thing that we actually care a lot about is the time it takes to build the Kubernetes images, and that is hard to speed up. That is basically the equivalent of make quick-release, and it is used in PRs both for our end-to-end testing, assuming we keep doing that, and also as an equivalent to the pull-kubernetes-bazel-build job we have, which makes sure everything actually builds.
E
That
up
is
more
work,
it's
very
slow
even
on
local
machines,
which
would
be
frustrating
to
be
few
people
who
actually
use
basil.
So
there
is
work
needed
to
speak
without
more
generally,
perhaps
and
cashing.
It
is
somewhat
trickier
as
well,
especially
since
it
spends
a
whole
bunch
of
its
time,
just
tearing
images
and
the
Basel
version
of
that
is
really
fast.
E
Really
fast,
relatively
fast
and
the
make
test
version
can
take
like
half
an
hour
which
is
not
so
good
in
particular,
when
we
use
for
cash
in
make
with
image
ECE
tests.
It
goes
through
the
stage
in
about
five
minutes,
which
is
much
faster
than
the
30
or
so
minutes
it
takes
to
build
Kate's
mci.
C
I just want to also mention that the Bazel build job also includes a lot of things being built where it's kind of unclear whether they should continue to be, a few of which are probably slated for removal from the repo. So at least some of that problem will just sort of solve itself.
A
So maybe a question on just how the Go test cache works: does the test cache prevent us from re-running unit tests that already passed when you made the cache?
E
It does, and so does the Bazel cache, which is why we can have a bunch of flaky tests that we don't know are flaky, because we cache them from passing once and never run them again. I don't think we want to get rid of this behavior in PRs, which would just be deeply aggravating, but I think there is value in having a CI job that regularly runs the tests with no cache, and indeed runs them a bunch of times, so we can see which ones are flaky and send emails to people or something.
E
I think to start with, I would have them be non-reporting presubmits, have the cache generation happen more frequently, and rely on the previous cache generation. I'm not sure I necessarily want to say these are the primary unit tests unless we actually agree that we want to remove Bazel, which has not been finalized; a substantial part of the reason I'm doing this work is to determine whether removing Bazel is feasible at all.
B
I remember that for a while, part of the reason why we built things in containers was that it was annoying dealing with everybody's environments; we initially tried to support Mac natively, and it has super old versions of things like bash. But I think these days we just require people to install more updated versions of those utilities. So, I don't know, maybe the situations are similar, I feel.
B
A motivating factor of irritation is the BUILD files, which are in every directory and which really don't need to exist, because the way we create them is literally with gazelle, and that just calls out to Go to get the list of what imports what and then converts that into a BUILD file.
B
Yeah
I
mean
yeah,
you
know
but
I
think
in
an
ideal.
You
know
I
think
in
an
ideal
world,
if
basil
prioritize
these
types
of
scenarios
like
theoretically
it
could,
you
know,
be
better
and
not
so
obnoxious,
I
think
the
nice
thing
about
it.
Testing
for
us
is
that
you
know
it
allows
building
different
languages
so
yeah,
but
yeah
for
the
you
know,
given
that
kubernetes
kubernetes
is
just
go,
it's
it
has
substantially
less
marginal
value.
There.
A
Yeah, I agree, that is a very...
E
You could skip the cache creation job if you just had every job update the cache after it finished. I think there's more room for interesting races there, but it would more or less work, and it is how Travis CI and the like handle their caching: they also just take the cache directory, zip it up, ship it up to S3 or whatever, pull it back, and reuse it.
C
I prototyped something like this in kind. I think the main thing is that it just depends on exactly how the rest of your jobs are going to work, because you need to hook in before or after doing something, and maybe someday with Tekton or something we will have a way to do that. But right now there's not really a prow-native way of doing that sort of thing; the pod utilities can do it.
B
By the way, Stefan is now on the meeting, and I don't think he was initially, but we were talking about your PR on S3 support earlier. So thanks for that, it's pretty cool, and thanks also for working through lots of different rounds of reviews.
F
No problem, yeah. I only just realized the meeting was already going on. Yes, I will have some follow-up PRs; I didn't really have them prepared for now, so I'm not sure if I can discuss some of it, or maybe I'll just put some discussion points in the next PRs and discuss it here better prepared; probably better that way.
F
But
I
think
the
general
idea
would
be
in
the
mind
right
now.
The
next
progress
would
take
a
look
at
GCSE
upload
package
and
cite
current
in
it
upload
how
to
reflect
to
this
part
and
for
requests
after
that
interaction
of
deck
and
spyglass
and
yeah.
All
this
complicated
stuff
with
path
and
yeah
mapping
from
from
deck
where
else
to
source
have
something
for.