From YouTube: Security WG meeting - Feb 25 2019
B
Alright, cool. So there's been a general discussion about the bounty program. We can open it up for discussion of what we do and how we would set up the criteria for which bounties would be valid or not. Overall, I think we should figure this out in an open manner, but I'd just like to say something generally before we do that.
B
What I think I just said in my comment is: if we are good on this plan — which I think we are, we're now getting all the thumbs up on this plan for the budget — I think that's good enough to first of all get the discussion with Coinbase rolling again, so they would be able to just put that money through HackerOne. And as far as I understand, for HackerOne —
B
It's entirely a manual process anyway, so when a bug comes in and we want to issue a bounty for it, that's a manual process. So until we figure out the criteria to set out, we have time — I don't want to block it until we get those two things out. If there is no stronger opinion against my comment from nine days ago, which is three comments up from the last one, or if you want to discuss it, I'm open for that.
B
It wasn't just that. I think — Tierney, to your question — Tracy Lee was in the discussion, I'm pretty sure, and it's not a legal issue about the logo. It was that if we do want to print it, it has to go through the Node foundation — like their collaboration with some entity, I don't know, to print it or whatever.
B
So again, if no one has a strong opinion against that, I'll probably just connect with them tomorrow, or sometime in the rest of the week — I'll just reach out to them. Once we have that figured out, we'll figure out the criteria for how to award the bugs in a separate thread. And that's it — I'll just close this one and open a new one about the criteria. Okay.
E
So just for context, for those of you who haven't followed along: I created an issue and then created a PR to create index files that effectively just pull together all of the files that are in each vulnerability directory — what is in npm and what is in core — and pull them all together into a single index. On that subject, in those directories, both Liran and Michael expressed a concern about duplicate data, which I don't share, but I know it's a concern you all have, and so —
E
I'm totally happy to work with that, work around it, and see what we can do. My goal is mostly just to have this in a way that is consumable by general users — out in userland — so they can use this massive pool of data that we have. I have a bunch of things I want to build around it, and I've already worked on one project.
E
I think it might be one of the only projects in the ecosystem that uses it — it basically did the work that I did in this PR, but inside a module rather than just pulling from the source. So it'd be great to have this as a consumable kind of asset, so you don't have to build a custom personal object around it — like concatenating all of the files yourself.
D
Actually, I think it's a good idea. In my opinion, even though the data is duplicated and there isn't a single unique source of truth, the thing is the data is also generated, so there's not going to be human error in there, and it's going to be consistent all the time. At the moment, what I do is just pull the zip file from GitHub, decompress the file, and do our parsing.
D
So what I was thinking is — I mean, the individual JSONs are not super human-readable, because we are now at the stage where we have many JSONs in the same folder. So it's super hard to iterate over them. I'd say even having a single JSON like that, or even an npm package with a single JSON with all the vulnerabilities, would be super helpful.
E
I mean, I think — and this is what I was thinking in approaching it — having the individual ones lets you work the way we approach everything in Node, which is in a modular way. You can kind of manage those individually and not need to care about the rest — you don't have to build a monolith of data.
B
If we would break the API or the format in some way, it wouldn't necessarily break for you unless you upgrade the package; you could consume it in different places. So generally, I think this is the idea that I wanted to take further with you and kind of develop on. And this is also not something that would specifically add an implementation detail to how we manage the vulnerabilities — it would just be something where we can —
B
You
know
I'll
do
all
of
this
magic
like
outside
it.
So
we
don't
need
to
duplicate
all
of
this
data.
We
don't
need
to.
You
know,
add
stuff
that
you
know
which
general
you
may
maybe
make
things
harder
later
on,
because
adding
these
indexes
is
kind
of
a
contract.
So
people
would,
you
know,
use
that
as
a
contract,
and
then
you
know
they
would
rely
on
it,
and
if
you
want
to
change
it,
it's
now
another
breaking
change,
etc.
So
I
don't
know
journey.
How
does
the
NPM
package
it
sound
to
you?
E
That's something I'm happy to take a stab at, especially with the consideration of the data existing in its own repository. If we want to actually use a repository as the source for the vulnerability data, or for the module as well, that would probably be a way to reduce confusion in the long run for end users: here's the source of the data, and here's how you're consuming it. I think that makes a lot of sense.
B
So let's pair on working on that. I think you would need two things to make that happen. One thing is: maybe we need to open another repo under the Node org to manage that package — I'm not sure; maybe it will be part of the advisories repo or not, I'm not really sure, let's see. But the other thing is: how would we distribute that npm package? Is it going to have the nodejs namespace on npm, or is it something else?
B
Yes — so what I could say is: Tierney, if you want to move fast on this, we could maybe open our own org or a team or something like that and put it there — I don't know, "friends of the Node foundation" or whatever, something that would be generic that everyone can take part in. And then — since this process, as Sam says, is very opinionated — once this process gets a little bit more mature and people are thumbs-up about it, we can just transfer it.
A
No — the nodejs-foundation user is just added as one of the maintainers, so the Build working group has access to that password. So if all the maintainers have disappeared for some reason, we can log in as that user and add new maintainers; but otherwise the maintainers would publish it just as if it was any other package. Alright.
B
So let's look at that. I also want us to be very diligent in how we're doing this — that's why I'm asking — because we want to enable 2FA and make sure that everything is set up correctly, especially in those sensitive areas. I'm not really sure how that's managed today; we can take a look and find out if it's good enough for us in terms of the security aspect, to make sure that this is fully managed well. We could do it under the Node practices, yes.
B
That's why I'd like us to — because it gets tricky, right? When we PR something new, like a new vulnerability, to the repo, we would maybe also want that to flow through and trigger a release of the new package with that owner. So there are a lot of tokens and stuff going around, yeah.
B
What I'm imagining is that once a PR for a new vulnerability lands in the repo — either for npm, core, or ecosystem — that would itself, when it gets merged, trigger the whole publishing process of a new release. So I don't think it changes anything for the security release process, but I think you'd want to do it in a very secure and organized fashion.
B
So: know what's going on under the hood, and have those hooks integrated into it. I think this would be an ultimate way of providing these vulnerabilities in a very consumable way. And, Tierney, I think we can also pair on what the API would look like and what the model would look like, so it would work well for all of us.
B
Just to be clear, what I'm imagining is that the module we're publishing is an actual library. So it would have actual class methods — you know, get ID, get report, get CWE, get upgrade path, whatever — and then all the breaking changes that we make, like if we change the formatting, would be a new major version.
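A minimal sketch of what such a library surface might look like. Everything here is hypothetical — `Advisory`, the method names, and the record fields are illustrative assumptions, not an actual published package:

```javascript
// Hypothetical sketch of the discussed advisory library: the published
// module wraps a raw vulnerability JSON record behind accessor methods,
// so format changes can be absorbed behind a semver-major release.
class Advisory {
  constructor(record) {
    this.record = record; // one entry from the vulnerability JSON files
  }
  getId() {
    return this.record.id;
  }
  getCwe() {
    return this.record.cwe;
  }
  getUpgradePath() {
    // e.g. the patched version range users should move to
    return this.record.patched_versions;
  }
}

// Example usage with a made-up record:
const adv = new Advisory({
  id: 118,
  cwe: 'CWE-400',
  patched_versions: '>=4.5.0',
});
console.log(adv.getId(), adv.getCwe(), adv.getUpgradePath());
```

The point of the accessor layer is exactly what is said above: consumers code against methods, not the on-disk JSON shape, so the JSON format can evolve independently.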
B
I think semver will help us adhere to that, and then we'll handle those breaking changes if we need to — hopefully not, right?
E
The way you're suggesting — which is to ship the database with the module — the problem we have is that we have to keep it updated with every new update of the registry, which means the data that's in GitHub right now. That means the only real way to reliably do that is that we have to maintain it as maintainers, but then the users also have to use it with npx, because they can't pin to a version.
B
Yeah, there are two edges to that. I don't think anyone actually pins to a specific version, because if you did that for a module, you're going to pin everything else — you're not going to get security upgrades for that module either. I'd keep it up to date, trust me.
B
So again, there are two ways to do that. One is we can publish it, and then people can consume it, and obviously they would need to get updates to get the new database. The other thing — which we're not saying, but it exists — is the API that we want to bring to it, and I think somebody already worked on some POC with Algolia or something.
B
So if someone would want to consume the actual database in real time — it already exists, or it's some work in progress that we can update, but it exists. So you have two choices: either you get something bundled with some kind of API around it and you can update it whenever you want, or you work with an API that is also kind of sponsored by Algolia, yeah.
E
And this is kind of why I was approaching it with the lighter touch of just putting all the JSONs together as they exist — because then, for example, that index JSON for core can just be published to the nodejs.org website, and that doesn't mean making a module or an API. It's just: this is the data as it exists in the security working group, in the vulnerability repo.
E
Basically, that's more of what I was thinking with that approach, because it's less prescriptive about how or why you need to use this. And that API — I think it's literally what Nathan White built; I didn't contribute to all of it, but the module you're talking about is what he built, and he also expressed an interest in giving it to the Node project.
E
So all that infrastructure is kind of already there, and we can take that on. That said, the light approach is: I don't particularly need a module, I just want this data. I totally understand that it's breaking, and I'm willing to rethink that, but I'm just trying to give my thoughts around why I approached it the way I did — I'm not trying to be prescriptive here.
B
I would say you could manage that index file outside of the security advisories repo, because that would tie us to a specific format, and I'm not sure we want to tie our hands to that one. And anyway, it's something you can always do regardless, right? You can clone the repo and, as David was saying, concatenate — make that image of the JSON however you want — but anyway duplicate it on your own. What would be the difference for you?
B
Sorry, I wasn't clear. I meant: if you want to do it for the nodejs.org website, right — yes — then that infrastructure can be managed, and this index generated, by the nodejs.org website as well, and then users would get it for free because we maintain it for them. The question is who —
A
Yeah, I think that's the key thing, right? Somebody has to maintain the second one, and then you have the same problems with the contract — you don't want to break that. So I think that's where I was coming from. My concern over having duplicate data is that you've now got two things you've got to maintain, and you still have to keep the contract for both of them, right?
E
I mean, I don't know if y'all took a look at how I built the index, but literally, when it is run, it just puts all of the JSON files together. It doesn't keep that historically — it just does it at that point in time. So the index that it generates will always be compliant with whatever we were complying with in the JSON files themselves — that same contract applies to both; it's inherently applied.
E
The management for us looks easier — like if we're going to go in and edit text or fix a semver range or something like that, that definitely appears easier to me. Not that the large JSON file is particularly hard, but I do see a couple of different exports of the data that would be useful as well, like exporting them all to Markdown and having that visible on GitHub. There are a few different options, right?
E
And honestly, the way I've written the tool, we can also just export it in different ways — we can do whatever manipulation on that base. For each JSON file, we can just loop over all of them and export to Markdown or CSV or whatever other format enterprises are going to use. We can do whatever we want with that data.
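A sketch of what such an export loop could look like — the field names are illustrative assumptions, and CSV or any other format would be the same loop with a different template:

```javascript
// Sketch: loop over advisory records and render another format —
// here, Markdown table rows.
function toMarkdown(advisories) {
  const header = '| ID | Title | Patched |\n| --- | --- | --- |';
  const rows = advisories.map(
    (a) => `| ${a.id} | ${a.title} | ${a.patched_versions} |`
  );
  return [header, ...rows].join('\n');
}

const md = toMarkdown([
  { id: 1, title: 'ReDoS', patched_versions: '>=2.0.0' },
  { id: 2, title: 'Prototype pollution', patched_versions: '>=1.4.3' },
]);
console.log(md);
```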
E
Yeah, and honestly, I would also be happy to build either a Probot app or a GitHub Action — well, we'd have to get permission to add it to the org — that would just create a PR when those directories are changed, updating by running that command. It's been a while since I've worked on this and I don't recall what it's called, but it would run that command and then PR the result; it would basically just be hitting the GitHub API.
A
I'm more thinking it would be good for it to be completely separate: the PR lands, you now have it there, and something then kicks off and pushes a file to, like, the nodejs.org website. Because that would mean there's no human involved — as soon as you said "PR," it's like somebody's got to do something, right? Yeah.
E
I mean, yeah, because it could be a release trigger when someone merges a PR into those directories. Ideally, the first thing could just go back to the PR and look again — it would reference it as well — but I'm not sure if I can easily do cross-repo; I can definitely look into that. Yeah, I think it's worth doing, whatever work needs to be done here.
E
So the way I've set this up right now, it pushes the indexes into the core and npm directories. We could also create an index or indices directory and just put them there, so we can look at that directory, if that would be easier and more explicit about the file structure and what exists where in the repo. Basically, if someone tried to crawl all of those now, they'd just get this recursive thing on all of it.
A
It would basically be on the CI machine — that's the easiest that I can think of. I'll have to get feedback from the rest of the Build working group, but it basically could just cd into a directory where the repo's been cloned, do a git pull to get up to date with the latest, regenerate the file, and then move that new file to the directory where it's served on nodejs.org.
A
Currently, benchmarking and coverage data — those files are generated by a build job somewhere else, and then they're on a machine that the CI SSHes to and rsyncs them across, every four hours or something. In this case, though, I assume it doesn't take much CPU or time to generate that big file, right?
B
I'm going to interrupt and chime in again, to try to approach it from a different angle. On a higher level, I would say the question is whether we see it as a responsibility of the security working group to create these kinds of different variations of exports — like a CSV, XML, or JSON format of all of these snapshots.
B
On a PR, you wouldn't need to update both of them; it would use the kind of logic that you already wrote, where it would just concatenate everything together and would not cause merge conflicts — because I think we raised a couple of issues around that. So that's kind of my case for it.
A
I get where you're coming from in terms of whether it should be in the repo. The only thing I might add to that discussion is: is it going to be more complicated to get a PR in, or whatever? Maybe we could just do it on the wiki — there's a wiki in the repo, so we could keep it in the repo. The benefit the website does get, depending on the level of usage, is that it has things like Cloudflare for caching and other things.
A
It sounds like we've sort of triangulated, at least in this discussion, on agreement that generating that file and putting it somewhere makes sense. So maybe the next step is for somebody to figure out what the flow might be. The website one I think I understand fairly well; I don't understand as well how easily you can automate the other.
F
Right — and to note, right now, each time there is something merged on master on this new repo, there's a job that will sync it up with the current repo. So we could just update this job through the builds, and then we commit to master with the build output. That's pretty straightforward with what we have right now. Okay.
A
So I think it's just a matter of writing down a flow, and we can work on that in an issue or something like that, because there are, I guess, a couple of questions: how does it get built? Where does it go? Does it go into the security working group repo? And then finally, should it end up on the website to be downloaded, regardless of whether it's in the security repo or not? Where we want people to go get it is almost a separate question.
E
Yeah, I agree. I think that's where the website-versus-module-versus-whatever-other-consumption question comes in. You can also just go to the raw githubusercontent.com link and consume it that way. There's a plethora of ways we could suggest people consume this, right? Yeah, I think that's a different question, but —
A
It's
almost,
we
should,
you
know,
come
up
with
our
recommended
ones,
just
so
that
we
can
get.
You
know
the
general
feedback
on
well
wait
a
sec,
some
people
may
say
no.
If
it's
gonna
be
broadly
consumed,
it
should
be
the
website
because
you
get
the
cloud
for
a
caching
and
all
that
kind
of
stuff,
or
no,
it's
fine
to
just
come
out
of
the
repo
right.
E
I mean, I think Byron mentioned — I hope I'm saying your name right — I think he mentioned possibly changing the naming from numbers to words or something. And right now — of course I'm dropping out at this point, sorry — right now, the name of the object is actually the name of the file. So in that way the numbers are extremely durable.
E
Yeah, and I think that, to the point of having the data as indices and being able to experiment with multiple things, you probably want to go down the path of a new indices folder, or just export it somewhere, so we don't have the debate of putting it into the vulnerabilities repo yet. So that PR might — well, it won't go into the vulnerabilities repo, but the data from it might — so that would again make sense as a different directory.
A
The file will end up in the security working group repo if it's considered to be part of the core delivery. There could be — we currently have, I forget what the directory is, but say there's a vulnerabilities directory — there could be something like vuln/formats, and then you could have the JSONs.
A
Not the two indices themselves; that would be a working step towards "okay, here's something which will create them, and people can then PR changes against that." But really, I think the biggest next step is an issue that says here's what we're going to be generating and here's going to be the flow: somebody makes a commit or updates a PR — what are all the things that happen, in terms of how that actually ends up resulting in the new index file being generated?
A
Okay, I think then we're at the end of the agenda, and we're pretty much at the end of the time. But let's just flip over to see if there were any questions on the YouTube channel. I see one question, which was: is Node.js more secure than Java? I think I'll agree with Vladimir that that's kind of a broad question for us to answer, and not necessarily in the scope of our working group to argue one way or the other.