From YouTube: Working Group: 2021-06-02
A
All right, on to release planning and updates. Platform: Javier, you want to start us off?
B
Maybe; sure. I wanted to see if Dan was here; he's not. Pack released recently, 0.19.0. I believe that was either Monday or Tuesday. I'm not sure; it should have been Tuesday, but I feel like I saw a Monday. But yeah, I think that's shipped. There are no other releases scheduled right now on the platform, so.
A
I think that one's Natalie's; she's not here, so I'll give us an update: I think we're just working towards the next release, sort of stage one of the buildpacks work on lifecycle, and continually pushing forward on that. No other release is planned.
B
Now, well, there's the new buildpack authors...
A
This is... this must be a workflow thing; that's not an actual RFC. "Started work on a proposal for shared layers directory." I'm going to... I changed... all right, no, this is not the one that I said. Yeah, it's not going to be a part of... oh sorry, "add a proposal for a shared layers directory". This is one of Sam's RFCs.
D
Before we skip off on that: that needs to be tagged. Is that going to be the core team for the tagging there?
A
Thank you. Please drive it through the buildpack author subteam. This one has all the approvals; it is assigned to Terence. And does this need... it just needs an RFC final comment period tag at this point.
A
The last RFC is "disambiguate layer metadata files from app metadata". This is the one that I'm taking over. I've started work locally on updating this one; there's nothing to discuss right now. I need to circle back, I think, with Emily. She had some concerns about the final pathing that I sort of listed here, and so I will re-request reviews once this gets updated.
B
All right, so I put that on there because last week we discussed, you know, rescheduling this meeting to be a little bit earlier, to be a little friendlier for some of the team members that are overseas, and the community of course. So we threw out a Doodle. We got at minimum seven responses, because the top three time slots had seven votes, as opposed to three or five, and they were on Wednesday:
B
10 a.m. Eastern time, Thursday 10 a.m. Eastern time, or Thursday 11 a.m. Eastern time. We do have office hours on Thursdays, although we could move those. If we wanted to keep it on this day of the week, we could just aim for 10 a.m. Eastern time.
D
Let's, let's move it, so.

B
Yeah, yeah; no, he did not vote for it.
C
I think this was a discussion Terence and Javier and I had in the last office hours, but I don't know if it's worth bringing it up again with the larger group. The gist of it was that there's some ambiguity in the project descriptor spec around certain fields, and that should probably be solved. That was one of the action items.
C
The other thing that was sort of open-ended is how platforms apart from pack should support the project descriptor: whether as a whole, or whether they can support parts of it. If they support only parts of it, does that mean we are going away from this model where you can take a builder and expect different platforms to build the same image? If a platform doesn't support the project descriptor spec, should it just fail? And what happens, like, how can we increase...
A
I definitely want lifecycle to support the project descriptor. I know there's been some pushback on making it optional; I think Stephen has been sort of vocal about it before. But yeah, I think we should at least push forward to having it as an optional phase as a starting point. It still does have the concerns that you talked about, as far as not being able to transport your source code from Google to Heroku to Salesforce to Paketo, or, you know, the build systems there. So I'm not sure, I think.
C
So if people are building, let's say, an image using inline buildpacks on pack, and they have that configuration in their project descriptor, and they go to another platform that doesn't support it, certainly the image they produce is no longer the same. So that's where the concerns were coming from: what happens in these cases?
B
Yeah, I mean, I definitely have a strong opinion that if the project.toml is to be seen as either a configuration or a set of instructions for a build process, and you build it using a tool that is buildpacks compatible, like pack, and then throw it onto something like kpack or Google Cloud, the same output should be produced from the same inputs, right? And as soon as we stray from that, because, let's say, a certain feature isn't supported, in the worst case it doesn't even warn; it doesn't tell you anything.
E
I feel like having the exact same output from every platform is maybe not something that's 100% feasible, right? Because you can imagine platforms having opinions about, you know, maybe a buildpack they're going to add to every build, or you could configure a set of allowed stacks. I don't think we should force every platform to offer the same options, or fail if they don't. It's not the right fit for every use of Cloud Native Buildpacks out there.
B
I think I'm looking at it from an end user's perspective, right, from the app developer that's going to try to use this project descriptor. I have it working exactly the way that I want it to on my machine, and then I ship it over to CircleCI, and all of a sudden it doesn't produce the same thing. And again, the worst case is it doesn't tell me anything about the fact that it completely ignored a set of configuration, and it produced an image which now, I think, can't be expected to behave the same.
B
So I think that's maybe one slightly separate conversation, right. But even if you just think about the fact that if you do support project.toml: I think one of the initial proposals was that, at the very minimum, if a certain buildpack-scoped property, like builder, isn't supported, then it should fail, right? Like, it should say that it does not support this configuration. And then what you can have is two project.toml descriptors, one for your...
E
You don't want it to fail. You're configuring things a different way on that platform. Like, you know, this is your builder that you're pulling when you're using pack, but if you're using a different platform, you're using their default builder or something, and you don't need to specify it.
C
I think that's where I wanted some clarity in the spec. Maybe we leave it to the platform to ignore certain properties, but if it ignores them, it should warn that it's not using those things, at the very least. And if it's a completely new version of the project descriptor, and the platform doesn't even know what properties that version has, it should say that this is a completely new version that it doesn't know anything whatsoever about; otherwise things will just randomly fail.
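The "warn rather than silently ignore" behavior being argued for here could be sketched roughly as below. This is only an illustration of the idea, not the actual project descriptor spec or any real platform's implementation; the key names and the helper are hypothetical.

```python
# Hypothetical set of top-level project.toml tables this imaginary
# platform understands. Real key names are defined by the spec, not here.
SUPPORTED_KEYS = {"project", "build", "metadata"}

def check_descriptor(descriptor, supported=SUPPORTED_KEYS):
    """Return one warning string per top-level key the platform would ignore,
    instead of silently dropping the configuration."""
    warnings = []
    for key in descriptor:
        if key not in supported:
            warnings.append(
                f"warning: ignoring unsupported project descriptor key '{key}'"
            )
    return warnings

# A descriptor using a hypothetical extension table the platform does not
# implement produces a warning instead of silent divergence:
parsed = {"project": {"name": "my-app"}, "io.buildpacks": {"builder": "x"}}
for w in check_descriptor(parsed):
    print(w)
```

The point of the sketch is only the shape of the contract: the build still proceeds, but the user is told exactly which configuration had no effect on this platform.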
D
It borders on the project descriptor being only somewhat optional as an extension spec. If it's in the platform API whether you support the file or not, and you have to do something about it, that is interesting. We also discussed last time that there are also extensions to the project descriptor, which itself is an extension. So one example is the builder extension, right: you could support the project descriptor, but not builders, on your platform.
C
How do you keep up with it? Whether the implementation team or the platform team should be responsible for maintaining, like, utilities that other platforms can reuse to replicate the behavior. For example, if you have something like build.include, whether it's following the same gitignore behavior across all the platforms when it's adding or excluding files. And whether this functionality itself should be under a feature flag, so that if a particular platform doesn't want inline buildpack support, it can just turn it off, or whether it should be shipped as a whole.
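The portability worry about build.include can be made concrete with a toy filter. Everything below is an assumption for illustration (the function name, and the use of simple fnmatch globbing); the actual spec'd matching semantics are exactly what's under discussion here, and if two platforms implement them differently, the same project.toml selects different source files.

```python
from fnmatch import fnmatch

def select_files(paths, include=None, exclude=None):
    """Naive reading of build.include/build.exclude: apply include patterns
    first, then drop anything matching an exclude pattern."""
    selected = []
    for path in paths:
        if include and not any(fnmatch(path, pat) for pat in include):
            continue
        if exclude and any(fnmatch(path, pat) for pat in exclude):
            continue
        selected.append(path)
    return selected

files = ["src/app.py", "src/app.test.py", "README.md"]
print(select_files(files, include=["src/*"], exclude=["*.test.py"]))
```

A shared utility maintained by one team, as suggested above, would pin down one behavior that every platform could reuse instead of each re-deciding these matching rules.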
A
There is a proposal for a prepare phase for the lifecycle, or at least there's an issue for it, and we actually have someone participating in Google Summer of Code who's going to be looking at it. So I know that this isn't exactly what we're talking about, but it would be good to include that information in the conversations that happen around that work.
A
Yeah, I think that's something to discuss. I think that could be something for tomorrow as well, at the summit, if you want to add that to the agenda. Continuing with you, Sam: you've got shared layers directory, RFC number 163.
C
So, like we discussed last time, I separated out that RFC into something that just has a read-only portion as a separate RFC, and this shared layers thing as a separate RFC. The idea is that we would introduce a new shared layers directory, similar to the layers directory, which would have a somewhat similar structure, except that multiple buildpacks could participate in populating its contents.
C
Each shared layer directory is owned by a specific buildpack, just for accounting purposes, and also to make sure that if that specific buildpack is removed, the layer is expunged from the next build. Mostly just for accounting and cleanup purposes.
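The ownership-based cleanup rule described here can be sketched in a few lines. The dict-based "filesystem" and the function name are illustrative only; the real lifecycle would operate on actual layer directories named for their owning buildpacks.

```python
def prune_shared_layers(shared_layers, group):
    """Keep only the shared layers whose owning buildpack is still in the
    detected buildpack group; everything else is expunged on the next build."""
    return {owner: layers
            for owner, layers in shared_layers.items()
            if owner in group}

# 'example/java' was removed from the group, so its shared layer is dropped:
on_disk = {"example/node": ["node-cache"], "example/java": ["m2-repo"]}
print(prune_shared_layers(on_disk, group={"example/node"}))
```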
C
It does not follow the typical layer directory structure, in that the lifecycle would not export variables for PATH, LD_LIBRARY_PATH, etc., except for the environment folder, where the environment folder here would behave the same way as in the layers folder. The only reason to include this alongside the shared layer is so that the environment variables set reflecting that shared layer have the same lifecycle as the shared layer, so a buildpack can't end up referencing, through an environment variable, a location that doesn't exist anymore.
C
So, apart from that, it has a layer.toml file, which has this type of cache flag for similar purposes: simply making sure that this is not accidentally kept across rebuilds.
C
Probably the whole reason why I introduced that subdirectory was for this specific cleanup, so that if you have a buildpack that, let's say, sets up a shared directory, and the buildpack is removed...
C
I think the only other reason I wanted to namespace this is so that a random buildpack couldn't just come in and clean up the entire shared layers directory; it would be limited in the amount of damage it could do, or the damage it could accidentally do. And the subsequent buildpacks could only view the layer through an environment variable, so even they wouldn't be able to just go and randomly delete stuff or put more things into it. So that was the only...
D
Yeah, I started looking, when I was reading this proposal, at how common it is to use env vars to set that stuff. I couldn't find something in Ruby, but I think npm and a lot of other ecosystems let you use env vars to override where the tool will put stuff. So I think having a buildpack check for that env var is not a crazy proposal, just naturally independent of this idea, right? It's like, if I'm mucking around with that store, you can imagine.
C
They'd automatically work with that; like, they don't even have to check it. They just know about the existence of a specific env var and work with it. The buildpack author doesn't necessarily have to even account for it. A separate buildpack could enhance the performance by simply introducing this cache directory that gets restored and...
D
To touch on the other RFC that Sam changed: I think overall the change is good, but the layerizing, like doing kind of this export-type phase during the middle of build, is a pretty big change. It may work; maybe worth bringing up.
E
It is a bigger change. What's interesting, though, is it sort of fits with the way the plan for stack buildpacks works, so in some ways it's consistent with the direction we're already going for one of the use cases like this. We'll need to figure out what spec changes fall out of it, but the idea seems feasible to me: move that logic from the exporter to the builder, and keep the exporter dumber. The exporter just puts things places.
E
I guess you're getting the exact set of changes you want in the launch image, which is the case where we want to be very careful. And for build layers, we're restoring the exact set of things the buildpack created, so the buildpack that authored the layer doesn't have to worry about dealing with random stuff on a rebuild that it didn't intend to be there. So I think it solves the big problems. And if another buildpack writes in there during build...
D
It feels like a silent fail, I guess, which is the problem. It's like a silent fail of a thing. And I guess the other thing is, it's not just build directories, right? These are just layers; it could be build and launch, it's not one or the other in that flag. So a buildpack expecting... because we aren't stopping them from doing it today, it does mean you are changing the launch image, even though it's not spec compliant.
D
Well, like, if I have a layer... Sam has explicitly said nothing in the lifecycle is actually preventing you from writing into it, right?
D
So if you layerize at that point, it is actually, like, a breaking change now, even though the spec isn't changing. Because just because I can do it means I could be doing that without necessarily knowing, and so now the launch image that's produced after the fact could be different, or even, like, build and subsequent things won't have those artifacts, because there's no, I guess, feedback to those buildpacks that are violating the spec. We just silently...
C
It's a breaking change either way, because right now I was expecting the lifecycle to keep those layers intact, exactly the way the original buildpack left them. But I was facing issues where a subsequent buildpack was unintentionally modifying a previous layer, like when executing a binary which had side effects, like creating random cache files in the place where the binary exists.
C
So now, even though the original buildpack didn't modify anything, a change or a rerun of a subsequent buildpack resulted in the original layer changing: new things being pushed out, like the entire cache being invalidated, and not being able to trust the metadata of the layer anymore, because some other buildpack did something which it didn't mean to do. It was just executing a binary, hoping it would be a no-side-effect operation on the original layer.
D
Yeah, I mean, I get the original use case. I just feel like we're providing no feedback for people who are potentially violating the spec, and in theory, I think the ideal scenario for me would be...
D
There is some type of error or warning, some type that pushes people to use, like, the shared cache or something. Or even if you're the one violating it and you didn't know: now you know that you're doing a thing, and maybe you should have picked up on something else, to use a shared cache or something. Maybe you're doing something that isn't a violation, but right now it will just function, and it may not give you the end result you're expecting, which it would have...
D
Potentially, if what Sam said was what you wanted to do: it actually introducing side effects. Instead of it being the wrong behavior, maybe that's the right behavior you wanted, right, and now it just doesn't work, and you don't know why, like, I think.
E
Yeah, we try really hard not to break things, right, but this is an unusual case where the buildpack is violating the spec, and there are, like, interactions between buildpacks. So we can't totally isolate the behavior of one buildpack from the behavior of other things in the system and keep it consistent for that buildpack.
E
I think if you're writing to a layer that is launch true, build false, there are already reasons you wouldn't be doing that, because that layer might not even be there on a rebuild to write to, right? The upstream buildpack could just be totally avoiding recreating it. So for a layer that is launch true, cache false, build false...
E
...you really can't get yourself into this situation anyway. And anything that's build and cache should be able to be recreated, so it won't matter if it gets exported with those changes; it will just be a little bit slower. The danger is only for build true, launch true layers.
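The flag-by-flag reasoning in this exchange can be condensed into a small classifier. This mirrors the discussion above, not any lifecycle API; the function and its labels are purely illustrative.

```python
def foreign_write_risk(build, launch, cache):
    """Classify how risky it is for a later buildpack to write into another
    buildpack's layer, given that layer's layer.toml flags."""
    if launch and not build and not cache:
        # Launch-only layers may not exist on disk at all during a rebuild,
        # so there is usually nothing there to write into.
        return "unlikely"
    if build and launch:
        # Restored launch content no longer matches what the owner wrote.
        return "dangerous"
    if build or cache:
        # Build/cache layers can be recreated; a stray write mainly costs
        # rebuild speed (cache invalidation).
        return "slower rebuild"
    return "other"

print(foreign_write_risk(build=True, launch=True, cache=False))
```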
E
I don't know if I see that as better, because now, all of a sudden, if you have a buildpack that's doing this, you're going to get failures.
D
On how much slower is slower, right: I think that varies from milliseconds to, depending on what you're doing, you know, minutes, right? So if you have no idea why this thing is never working, I think it is probably very frustrating for a buildpack author to debug, if you get no feedback from the system.
D
Yeah, I mean, I'm not dying on this hill. It was a thing I noticed when kind of reviewing the changes from the RFC.
D
Either way, because if you, I guess, layerize and do whatever, then it is independent of kind of what happens to the rest of the build system.
C
Apparently there's this NTIA working group that meets biweekly that does exactly this: compare the different formats, try to figure out which one is good and how they can be interoperable, and what the tooling ecosystem for each of them is. And they also have a bunch of this thing called plugfests, which are, like, sample projects which they then run the tools from each of the SBOM ecosystems on, to see how complete the output is, or whether a tool is even aware that it's generating incomplete output, and things like that.
C
So that's a fairly comprehensive study of the different SBOM formats, and the conclusion from that is that there's no one SBOM format that is the best.
C
So if you go through that document, it compares three major formats, SWID tags, CycloneDX, and SPDX, and it also goes on to point out the tooling support for each. It divides the tooling into three different categories: create, transform, and consume, or, like, produce, transform, and consume.
C
So, like, I think, in terms of the buildpacks project, we'd be most concerned with tools that create or transform rather than consume. I don't imagine us consuming SBOMs; consumption in the sense that a security analyzer takes a look at one and figures out, like, which CVEs or licenses, etc., etc.
C
So I'm assuming we are not going to deal with that side of the SBOM story, and we'll mostly be dealing with creation, and at times transformation between different formats if needed. That working group also specifies mappings between the different formats: how to convert them, or what's the equivalent between the different SBOM formats.
C
And it finally highlights what the key use cases are for each of the formats. There also seems to be some sort of political background here between SPDX and CycloneDX. CycloneDX, just a couple of hours ago, announced that it would be trying to aim to become, like, one of the... what was it called, like a...
C
...to sort of make it more of an attractive option as compared to SPDX, which has the backing of the Linux Foundation. And apart from that, in terms of the actual use cases and key features, it looks like SPDX does indeed have, like... as of 2.2, it has incorporated purls as a way of specifying packages, so that sidesteps some of the issues that SPDX 2.1 had around identifying packages and associating them with CVEs.
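For context on the purls mentioned here: a package URL is a compact identifier like pkg:npm/lodash@4.17.21 that pins a package to an ecosystem, name, and version, which is what makes CVE matching tractable. The toy parser below handles only that simple shape and is illustrative; the real purl spec also allows namespaces, qualifiers, and subpaths.

```python
def parse_simple_purl(purl):
    """Split a simple pkg:type/name@version package URL into its parts.
    Illustrative only; not a full purl parser."""
    scheme, rest = purl.split(":", 1)
    assert scheme == "pkg", "package URLs always use the pkg scheme"
    type_, name_version = rest.split("/", 1)
    name, _, version = name_version.partition("@")
    return {"type": type_, "name": name, "version": version}

print(parse_simple_purl("pkg:npm/lodash@4.17.21"))
```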
C
It also is pretty comprehensive in defining relations. Although it is, like, a flat document, it has this field called relationship, which has, like, 20 different types of relations between different components that you can describe. The key things that are lacking in SPDX as of right now are concepts such as provenance or changes.
C
It's again mostly, like, these two things: vulnerability remediation and pedigree. So pedigree is, like, saying that, okay, this is a derivation of this, with these patches, and vulnerability remediation is saying that this patch fixed this CVE.
C
I can't say, I'm not really sure, but overall it looks like SPDX is getting a huge push from the Linux Foundation. They recently ramped up a lot of effort into creating more tooling for it. There are also some points of contention there, where there are, like, some weird allegations around SPDX folks copying CycloneDX tools or whatever. I'm not sure about the authenticity of those claims, or, like, what's happening there, but that's something I saw in terms of the actual tooling as it currently stands.
C
CycloneDX does have very good tooling, but SPDX also seems to be catching up. The only thing was, up until recently, which is CycloneDX version 1.3, CycloneDX didn't have a way of saying that it generated an SBOM that was incomplete; like, it didn't have a way of saying that this SBOM may be incomplete.
C
They recently added that feature, which sort of brought it to parity with SPDX in terms of security-related or SBOM-completeness-related stuff. And I have completely ignored SWID in this whole discussion, because I couldn't find a lot of tooling for it, and it seems to be more of, like, an enterprise or U.S. government sort of thing rather than an open source project. So I think we want to go with the more open-source-forward projects.