From YouTube: Implementations Sync: 2021-02-04
Description
Meeting notes: https://bit.ly/38pal2Z
B
Awesome, status updates.
B
First, I don't have much to share in the way of, you know, contributed PRs; I've just been spending some time doing reviews. I hope to share, maybe today, the work that I did last week on the refactor.
D
I think I might be one of the people holding that up; I can take a look at that today. I also don't have a lot of updates. As I warned you all, I've been working on other things, but I'm almost done digging myself out from under the pile of other things I need to do, so I should be more available in the coming weeks.
E
In response to everyone's feedback on the Windows SID RFC, I was poking around at how to implement run-as and ensure-ownership, those functions that aren't implemented on Windows yet, and I had some interesting discoveries along that path. But it does look like it's going to be feasible. Thank you all again for your questions; the RFC discussion has really helped. That's all I've got.
B
All right, I guess, moving on to release planning: I don't think much has changed from last week, but do we want to talk about it, or should we proceed as we did?
C
It relates to the default process: in case an old buildpack adds default = true, we decided that we're going to have a warning and also override it, I mean, set it to false. A few days ago I put this up, but I think that now we know the answer.
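For illustration, the warn-and-override behavior being discussed might look something like the following sketch. The type names, the helper, and the API version cutoff are all assumptions for the example, not the lifecycle's actual code.

```go
package main

import (
	"fmt"
	"log"
)

// Process is a simplified stand-in for a launch process entry;
// the real lifecycle types differ.
type Process struct {
	Type    string
	Default bool
}

// apiSupportsDefault is a placeholder check; the real cutoff
// version would come from the spec.
func apiSupportsDefault(api string) bool {
	return api >= "0.6" // naive compare, adequate for single-digit "0.x" versions
}

// sanitizeDefaults warns about, and clears, the Default flag on
// processes contributed by a buildpack whose API predates the feature.
func sanitizeDefaults(api string, procs []Process, warn func(string)) []Process {
	if apiSupportsDefault(api) {
		return procs
	}
	for i := range procs {
		if procs[i].Default {
			warn(fmt.Sprintf(
				"process %q sets default=true, which buildpack API %s does not support; ignoring",
				procs[i].Type, api))
			procs[i].Default = false
		}
	}
	return procs
}

func main() {
	procs := []Process{{Type: "web", Default: true}}
	procs = sanitizeDefaults("0.4", procs, func(msg string) { log.Println("warning:", msg) })
	fmt.Println(procs[0].Default) // prints false: the flag was overridden
}
```

The point of warning rather than silently clearing is exactly the concern raised in the discussion that follows: authors who forget they are on an old API should be able to see why the flag had no effect.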
B
I think it might be helpful for us to kind of extrapolate and articulate what our guiding principle is in situations like this, because I could be wrong, but I think we have some inconsistencies already. Sometimes we just ignore things (like, I think, profile.d, if an old buildpack contributes one, though I can't remember exactly); in some cases we just silently proceed as if the buildpack didn't do anything, and other times we might even fail. So it would be helpful to nail it down.
C
I love the idea of warn-and-override, in case we need that. I don't think we should fail if we didn't fail before.
A
That is to say, I think, as long as we make a decision here, it seems like the best path forward. Yeah, yeah. I wonder if there are any places where we're picking up new features and processing them even though they're on the old API version. Like you said, if someone is emitting a profile.d from, say, buildpack API 0.3, is it getting executed or something? Do we have all the proper guards in place already?
D
I think we've done every permutation of the options, for the least possible consistency. I know some things we've just respected, even though they're not included in the old API, just to be convenient for people, because it's a totally additive feature, but I don't think that that's the right thing to do.
C
I'm just afraid that buildpack authors will see... I mean, for example, in our default process case, they will see: okay, I have default = true. They will kind of forget that they are supporting the old API, and they will wonder why it is not working. So, for me, this is why I'm...
C
Anyway, we need to override it in this case; but about the warning, I'm not 100% sure.
D
That's why I'm torn as well; I can definitely see that point. Does anyone else have an opinion between the two, or want to weigh in here?
A
Should it be a common error code, like, you know, "you've provided inputs that are not valid for this buildpack version", or is it specific, like "you've set default, and that is not valid"? I'm just wondering, because in some cases, I guess, if a buildpack is emitting files into profile.d that we're not processing, or some future folder that we're not processing...
A
Today, we're not going to go look at all the files in that folder and say, "hey, look at these files: you've put files here for a version that you haven't determined to be your version of the buildpack API." Like, you know, I don't know; I can see it.
D
I think it would be wrong, or in some ways you have to be careful, because you could easily imagine adding logic to check for build.toml and then warning if it's present under an older API. But under an older API you could have a layer named "build" that had a layer TOML named build.toml. So when it comes to the keys in a file it's pretty straightforward, but when it comes to the existence of files, like trustee was saying, it's a little bit...
D
...trickier. But it sounds like maybe we want to just go ahead with the warning as our philosophy here, and keep in mind that we have to be careful about some cases. It might be something worth bringing up in a working group, just because this sort of affects a broad range of people who use the lifecycle, and if someone had a really strong opinion, it'd be worth hearing.
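The file-existence ambiguity mentioned above can be made concrete: in a layout where a layer named "foo" under the layers directory is described by a metadata file `foo.toml` (a simplification of the real layout), a layer that happens to be named "build" produces exactly the path a hypothetical top-level build.toml would occupy, so checking for the file's existence alone cannot distinguish the two. A small sketch, with assumed paths:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// layerMetadataPath returns where a layer's metadata TOML lives:
// a layer named "foo" under <layers> is described by <layers>/foo.toml.
func layerMetadataPath(layersDir, layerName string) string {
	return filepath.Join(layersDir, layerName+".toml")
}

func main() {
	layers := "/layers/example-buildpack"
	// A layer that happens to be named "build"...
	fromLayer := layerMetadataPath(layers, "build")
	// ...collides with a hypothetical top-level build.toml.
	topLevel := filepath.Join(layers, "build.toml")
	fmt.Println(fromLayer == topLevel) // prints true: existence alone is ambiguous
}
```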
E
So, as I was saying I wanted to do, I was playing around with the Windows implementation and discovered something that I feel kind of bad for missing: during the build phase, everything is still running as Administrator, because Windows isn't actually dropping privileges. I don't quite know how I missed that, but everything obviously works; the buildpacks that we had made to exercise this were, at least, apparently fine running as Administrator.
E
So we just didn't really have something that exercised the need for a lower-privileged user. As I was going through and validating, specifically, the questions that Emily had thrown into the RFC around whether we have enough inputs to properly drop privileges, I went ahead and tried to put through a mock implementation for run-as.
E
As that does drop privileges, it does seem like it's going to be feasible for Windows. I'll spare you some of the crazier details, but I was kind of wondering (and thank you, Jesse, for the question): I was just going to dig a bit deeper into the preparer model. Is the expectation for the preparer that it's going to be running in creator, or as another phase, all executing the same binary? Is that the case?
A
So, to set the stage: prepare has not formally been through an RFC and approved, I don't think, and even if it had been, I think in Stephen's mind prepare is an optional thing; maybe creator does it. Prepare would be responsible for processing, maybe, extension things like project.toml, and we are proposing moving the analyzer before the detector. But prepare itself, as we know it today, I don't think is intended to be required, and thus we probably can't rely on it to do anything that you're maybe thinking about doing upfront, unless...
E
Yeah, that makes sense, but it would probably be an RFC on top of an RFC to add more stuff to prepare, eventually. Okay, so then, to answer my question: it sounds like we need a pretty extensive run-as implementation that is almost equivalent to what happens on Linux.
E
You have to drop the privileges that the administrator had before you get to change environment variables and stuff too, but that's good context. And, as was mentioned, we have to change privileges on the sockets, or do all the socket connections beforehand, so yeah, all that should still work. I think it'll also let us remove some of the...
A
Doesn't it do that already? So hopefully it falls into the common code path; it already fixes permissions today if you run as administrator.
E
Right, right. Actually, yeah, Brian made that second function that's inside of there, the ensure-ownership. Yeah, thanks, that's good background; I think that answers my question. I'll keep plodding along to see if I can get something equivalent there, but it does seem like it's going to be feasible, so there's a light at the end of the tunnel.
D
I'd love to kill the chowning stuff, which I feel like should be feasible. One of the only problems is that, at one point, we ran certain phases as root, and there was sort of a migration issue where we had to chown things, because we had created volumes with the wrong permissions. But that was so long ago that I think we can safely tell people that, if they run into it, they should just delete the volume and bust their cache. That doesn't really worry me anymore.
D
It creates volumes that are unreadable right out of the gate, but I think, when the builder spec lands, or with something we can put into the platform spec, if we required that on the base image the directories already existed with the right permissions (whatever the app directory is, if you're going to use it, it has to already exist, and things like that), we could get rid of the chowning stuff, which would make the lifecycle simpler.
F
Yeah, I would agree, from the platform's perspective, that if there were a contract of very specific permissions set for certain directories, it should be the platform's responsibility, and it shouldn't be expected that the lifecycle mutates those things, because right now I think there's a higher potential for an issue if it modifies something to an unexpected state, right?
E
I'll throw in, for what it's worth, that the chowning part of the Windows implementation will probably be the easier part, and maybe, I don't know how y'all would feel, but I might deliver that independently of the run-as piece, because it would speed up Windows a bit and it would get rid of one extra container that runs beforehand. But yeah, I might spin that off, since it actually seems feasible; see what you think.
B
All right, I guess we could move on. We have one more item on the agenda: I was hoping to take us through the refactor PR that I put up, so that it's easier to get feedback and also just to kind of sanity-check what I did. I'll share my screen unless there's anything else; since this might take a little while, are there any other quick items to discuss?
C
There is something that we wanted to talk about: the validation process, right? I mean, if someone puts up a PR... You brought this up, so maybe you would like to introduce it.
B
Yeah, this is relating to a conversation that we had in our CNB contributors standup at VMware today, regarding the validation that we do for PRs. You know, I know for some more complex features we might actually do a pack build with a dev version of the lifecycle, to exercise certain features and really get the user experience covered by testing, and I had this question of who's responsible, right? Is it the person submitting the PR?
F
Is there an expectation that I regurgitate what I said?
F
Yeah. So, this is strictly speaking from my experience on pack and, I guess, more or less the expectation that I would think should be standardized within the open-source community.
F
A lot of this really hinges on, and is more or less the responsibility of, the maintainers, and how averse they want to be to, essentially, the risk assessment made by the PR. So you could think about individual PRs: if it's a minimal change, then in that particular case the user acceptance test could kind of be overlooked.
F
One of the things I mentioned that we did in pack, that was helpful to this particular process, was that we added a section in the PR template for the user to provide the before and after outputs. That allows me, as a maintainer, to see exactly what sort of testing they've done beforehand, and to get a better understanding of whether there are additional edge cases or use cases that weren't tested and probably need to be.
C
Me? First of all, it's not my fault, I'm kidding. Anyway, I think that whoever puts up the PR, it would be great if they ran some acceptance... I mean, it's not exactly acceptance testing.
F
I think my concern, and the reason why I kind of left it a little bit ambiguous, is that I don't think you could expect other contributors, who are not in this room, to be able to really comprehend the impact and the sort of testing that is involved, and setting that sort of expectation could be, you know, quite detrimental to contributions.
D
I was thinking that too, because even compared to pack, it can be a little bit hard to validate some of these changes. Like, if you're adding a new feature to a new platform API, but no one's written a platform that uses it yet; you know, sometimes we go hack changes into pack to really test something out, or run things in a container, but it's a little laborious.
D
So I like the idea of putting some of these questions in the template, like creating a PR template (because, first of all, we don't have one) and maybe putting a prompt in there. I think in the lifecycle it should maybe be more open-ended, like: "what, if any, validation have you done outside of the tests contributed with the code?" Just give people a place to put what they've done, so we know how much is left to do.
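A starting point for such a template might look like the following; the section names are only suggestions, not an agreed format.

```markdown
## Summary
<!-- What does this change, and why? -->

## Related RFCs / issues

## Validation
<!-- What, if any, validation have you done outside of the tests
     contributed with the code? Include before/after output where
     it would help reviewers. -->
```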
B
I like this idea. Does anyone want to take the action of setting up a template? If not...
B
There we go. Did anyone want to take the ask to the working group?
C
I'm not sure that I'll go to the working group today, but if I do, I don't mind doing this; or we can keep it for next week.
B
Awesome. All right, in that case, I can do a quick share of my refactor.
B
Let me pull it up and... oh no, GoLand is quitting because I'm not on the VPN, so I have to do some stuff here. Let me paste the link in our chat.
B
I guess I'll just preface this with: we're hoping to get this accomplished this release cycle, because we think it will make it easier for stack packs to land. Let me share.
B
I wonder if maybe it makes sense to kind of toggle back and forth between these two, maybe not in the last release.
B
But, oh, there are a lot of changes, I guess.
B
You know, group detect: that was very difficult, because what it meant was that all of the code related to detect really needed to be in the same package. So kind of the biggest change, I would say, in this refactor is moving that code to be more independent and allowing for certain...
B
This is really confusing, sorry; this is not helping me. ...allowing for them to be kind of composed independently. So, I don't know, is this making any sense? What's the best way for me to proceed?
A
Yeah, I think it was starting to make sense when you were talking about showing all the methods that live off the struct, kind of seeing where those methods ended up and, sort of, how much more composable they may or may not be now in their new spot, whatever that means, you know.
B
I think that's a good place to jump in; that was Jesse's suggestion. Maybe just looking at the new package, and then we could kind of back out to what the changes actually were.
B
I guess I'll just start at the highest level, with the descriptor. I kind of renamed some stuff to make it, you know, not stutter and such, but all of this could be changed. So this is kind of representing the buildpack.toml file, and, you know, if it's a meta-buildpack, then it will have...
B
...you know, an order within it. So this is just kind of moving all of the structs that might be referenced from the TOML into one place, and then build.go really contains all of the stuff that used to live in the buildpack TOML code in the lifecycle package. It really was just a copy-paste to move it over, and the primary method here is the build method on a single TOML file.
B
All of the other stuff, you know, the methods that used to live on the struct, is still in the lifecycle package; I kind of made these detect-order and detect-group functions, which is really just pulling them off the struct and putting them on this detector thing.
B
The detector is initialized with a detect config that it passes along, so maybe it makes sense to split these; when it calls detect on an individual buildpack, it'll pass that config in.
B
Oh my goodness, sorry, am I looking at the same... nope, here: it'll pass that detect config to the individual one, and then it also has this thing called a resolver, which really just contains all of the logic that has to do with log aggregation and plan resolution. That really sounds like two responsibilities, so it could be further decomposed, but for now it actually lets the tests mock certain stuff out that before needed to be fully implemented in our fixtures.
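As a rough sketch of the shape being described (the names here are paraphrased from the discussion, not the PR's actual identifiers): the detector orchestrates detection and delegates plan resolution to a resolver interface, which is what lets the tests substitute a mock.

```go
package main

import "fmt"

// GroupElement and BuildPlan are simplified stand-ins for the
// lifecycle's real types.
type GroupElement struct{ ID string }
type BuildPlan struct{ Entries []string }

// Resolver aggregates per-buildpack detect output and resolves a
// build plan; keeping it behind an interface lets tests mock it.
type Resolver interface {
	Resolve(group []GroupElement) ([]GroupElement, BuildPlan, error)
}

// Detector orchestrates running detect on each buildpack in order,
// then hands the results to its Resolver.
type Detector struct {
	Resolver Resolver
}

func (d *Detector) DetectGroup(group []GroupElement) (BuildPlan, error) {
	// ...each buildpack's detect would run here...
	_, plan, err := d.Resolver.Resolve(group)
	return plan, err
}

// passthroughResolver is a trivial Resolver for illustration.
type passthroughResolver struct{}

func (passthroughResolver) Resolve(g []GroupElement) ([]GroupElement, BuildPlan, error) {
	entries := make([]string, 0, len(g))
	for _, e := range g {
		entries = append(entries, e.ID)
	}
	return g, BuildPlan{Entries: entries}, nil
}

func main() {
	d := &Detector{Resolver: passthroughResolver{}}
	plan, _ := d.DetectGroup([]GroupElement{{ID: "a"}, {ID: "b"}})
	fmt.Println(plan.Entries) // prints [a b]
}
```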
B
So I can show the tests; the tests were actually a significant amount of effort. Maybe just to remind everyone who didn't work on the detector tests recently: we had these fixtures, which were a bunch of buildpacks that all referenced each other, used to implement this sort of crazy chain of resolution, and trying to find it, just to show, like, yeah...
B
All of this test data by ID, a, b, c, d, e, f, g: it's now possible to represent those relationships in our mocks. So if I go here (and I'm not sure if this is honestly better, because you'll notice that, okay, here are all of a, b, c, d, e, f, g), I can now declare in the test that, you know, e actually is f and b, a is an individual buildpack, and f has c, g, and d. Which I like, because it's right here in the test and I can see all of the relationships, but it is quite a lot of expect calls, and it's kind of not clear where's the setup and where's the test.
B
Their exit codes are going to be a, c, and b, and then, you know, more stuff happens, and then I'm going to call resolve on a and b. This used to be tested by looking at this huge log output, right: a, c, and b, then a and b, then a, c, d, and b. It's now being tested, instead of by actually calling detect on these fake buildpacks, by asserting on the interfaces, and it took me a really long time to write this.
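The shift being described, from asserting on aggregated log output to asserting on calls made through an interface, can be illustrated with a hand-rolled recording fake; the real tests use gomock, and these types are heavily simplified.

```go
package main

import "fmt"

// Resolver is a simplified stand-in for the resolver interface.
type Resolver interface {
	Resolve(group []string) ([]string, error)
}

// recordingResolver records every group it was asked to resolve,
// so a test can assert on the calls instead of parsing log output.
type recordingResolver struct {
	calls [][]string
}

func (r *recordingResolver) Resolve(group []string) ([]string, error) {
	r.calls = append(r.calls, group)
	return group, nil
}

// runDetect stands in for the detector driving a series of trials
// through its resolver.
func runDetect(r Resolver, trials [][]string) {
	for _, t := range trials {
		r.Resolve(t) // errors ignored in this sketch
	}
}

func main() {
	fake := &recordingResolver{}
	runDetect(fake, [][]string{{"a", "c", "b"}, {"a", "b"}})
	// Assert on the recorded interface calls, not on logs.
	fmt.Println(len(fake.calls), fake.calls[0][0]) // prints 2 a
}
```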
A
Where's the buildpack store passed in? Is it where you stand up the resolver, or something?
A
It feels way less brittle than the current log-based one for moving stuff around; that seems definitely true to me. All right, I'm less familiar with using gomock a ton, but I do think it's easier for me to get used to that than it is to understand how all these buildpacks relate to each other and to try to keep that in my head every time I come back to these tests. So I like them being in the test, I think.
B
One place that I'd direct your attention to in looking through this is the resolver. Sorry, as I mentioned, it kind of feels a little funny, in that it's responsible for aggregating logs from a group, right: I'm going to aggregate all the logs together, and so there are tests around "this is output at info level versus debug", and it's really handling that piece. But then there's also stuff like, all right...
B
"Well, let's try to resolve a build plan," and it's kind of doing that piece of work as well, which kind of reads okay. So I'd be open to reworking it as well, but maybe it makes sense to just look at the high level, right? We have these detect and resolve at the top level; the detector is really responsible for orchestrating the buildpacks and executing them in the right order. And so, actually, in this setup it gets a mock resolver, right?
B
So that's the whole point: to be able to assert on what the resolver gets called with, and when. But then here, for the resolver itself, this is the test on the real one, and it has these output assertions as well as build plan assertions.
B
Yeah, yeah, it emits the plan back to the detector, and that's why, in mocking this resolver for this test... The one that's interesting is the conversion of the... I mean, I don't know, maybe this should be the responsibility of the resolver, but I mocked that the return value here was a specific, you know, lifecycle build plan with certain characteristics.
D
Like, I know you're talking about how it's awkward to test logs in the middle of resolving, but honestly, this all used to be tested in one go, with every permutation of it (and probably some of those weren't tested, because it's hard to decompose the combinations), so I feel like this is definitely a big improvement, to at least separate these two and worry about logs somewhere else.
C
I have a question. I think you already said it, but what was the reason for doing all of this? I mean, in addition to making the code clearer and more testable; I remember that you said something about stack packs, so can you please elaborate on this?
D
I can give some context, because I think I pushed this a little bit. Testing these through the interface we had was already complex, and there were a lot of permutations of what could happen. With stack packs it's only going to get more complex, right, to the extent where I feel like our current organization was at the limit of becoming hard to maintain, and adding all the complexity of stack packs in there, I think, would have broken it and made us want to refactor.
A
Looking at some of the stuff that sort of happened because it was so hard: you know, detect was emitting multiple files, right, the order (or the groups) that we emit today, and the plan. But now I need to do multiple of those, and so it meant that you sort of had to do this major refactor where you had to have a result; so, like, the resolver, breaking that out into its own sort of capability.
B
I think part of the review process, or, you know, the checking of what I've done here, will involve looking at the tests as they were before and, I guess, just confirming that what we're testing now is equivalent. I think there are some particularly involved tests that became, you know, two or more smaller tests. I can try to find examples, just to kind of guide everyone's eye, I suppose.
A
Like the result: the resolver here returns back a group and then a collection of build plans, or whatever it is, build plan entries. I kind of wonder if we should start working towards having a single result set that the commands themselves operate on, so that you can sort of have additive changes, you know, an additional group (like a root-privileged group, or whatever it's going to be for stack packs), right? Because otherwise I think it's still potentially going to explode, like this resolve statement.
B
So I'll just note that whenever this does get merged in, it's going to be a really fun merge conflict too. I know, Jesse, you had the same problem with the stack packs work.
B
I've already... I'm going to update this, yeah, and, Dan, with your work. But if there's any way that this could get merged in before we do any other large effort, that would be, you know, awesome.
B
All right, if there's nothing else on that, I guess we can adjourn and get five minutes back.