From YouTube: Digital Identity Attestation (October 14, 2020)
A
You know, we don't dynamically link everything, but what it also means is that this repo is completely self-contained. With build essentials installed, you can more or less compile on any Linux. It doesn't require anything else on the system. So we have a lot of different information in our readme, but the main thing that I wanted to point out towards this question regarding the onboarding process is our current team members. So you'll notice that we have a technical steering committee.
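As a hedged illustration of what "compile on any Linux with build essentials" typically looks like for Node.js (the version number and package names here are my assumptions, not taken from the meeting):

```bash
# Minimal from-source build on a Debian/Ubuntu-like system; only the
# standard toolchain is needed because the repo vendors its dependencies.
sudo apt-get install -y build-essential python3
curl -fsSLO https://nodejs.org/dist/v14.13.1/node-v14.13.1.tar.gz
tar -xzf node-v14.13.1.tar.gz && cd node-v14.13.1
./configure          # no external library dependencies required
make -j"$(nproc)"    # builds the self-contained node binary
./node -e 'console.log(process.version)'
```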
A
I believe there are 18 of us. You'll see we have emeritus members of the steering committee, and then we have the collaborator list, you'll notice. It is not a short list: there are over 100 collaborators, and we even have a lot of former collaborators.
A
We recently introduced a new role called triagers, which I will show you all in a minute, and the release stuff we'll get to in a bit. We have release keys for all the people who are able to do releases. So the first thing that I would dig into here is, like, our governance, talking about the different roles. So, I mentioned we have a triager role. This is a new role that we just opened, and there's a guide for people who are triaging.
A
They
have
the
ability
to
apply
labels
to
issues
as
well
as
comment
close
and
reopen
issues.
The
project
recently
introduced
the
ability
to
kick
off
ci
and
land
things
through
a
commit.
Cue
commit
cue
through
labeling,
as
long
as
like
a
handful
of
release,
gates
and
checks
are
done,
so
you'd
need
all
the
ci
to
be
green.
You'd
need
the
right
amount
of
sign
off
and
you'd
need
the
commit
to
be
able
to
land
cleanly
for
it
to
work
that
way,
but
our
triagers
have
the
ability
of
doing
all
of
that.
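A hedged sketch of how those gates could be approximated from the command line with the GitHub CLI; the JSON field names are real `gh` fields, but Node's actual commit queue runs more checks than this:

```bash
# Rough approximation of the commit-queue preconditions for a PR:
# green CI, enough approvals, and a cleanly mergeable commit.
PR=12345   # hypothetical pull request number
gh pr view "$PR" --repo nodejs/node \
  --json reviews,statusCheckRollup,mergeable \
  --jq '{
    approvals: ([.reviews[] | select(.state == "APPROVED")] | length),
    failing:   ([.statusCheckRollup[] | select(.conclusion == "FAILURE")] | length),
    mergeable: .mergeable
  }'
```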
A
Both collaborators and non-collaborators can propose changes. For a change to land, two collaborators must have approved it, or one collaborator if the commit's been open for at least seven days. If something's been open for 48 hours, it has green CI, and it has two sign-offs on it, it can land, unless it's semver-major, in which case the TSC needs to approve it.
A
If there are objections, we do work through a consensus-seeking process. We attempt to talk through those objections and find consensus. If consensus cannot be reached, most of the time that just means the change doesn't land, but it can be escalated to the technical steering committee to either vote or reach their own consensus to override the objection.
A
There's a handful of other activities for collaborators, but the one thing that you were asking about was nomination. It is a bit of, I think, "shoulder tap" is maybe the best way to describe it, but essentially any existing collaborator can nominate someone else to be a collaborator. There is an expectation that nominees would have significant or valuable contributions across the Node.js organization, so it doesn't need to be explicitly only in nodejs/node, although there is an expectation of familiarity, because there is a bunch of work that needs to be done from a stewardship position. So, to nominate a new collaborator, you can open an issue in the nodejs/node repository and provide a summary of the nominee's contributions, such as their commits in the Node.js repo, pull requests, comments on pull requests, reviews, and help given. And here are just, like, a handful of different queries, ready to go, that we can look at. So, for example, if we did this one and we put in my name (and I'm struggling to type because I am on a split ergonomic keyboard I don't normally use), you can see, you know, for example, all of my commits, and we'd go through and kind of list all of these things.
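For illustration, the kind of contribution summary being described can also be pulled from the GitHub API; a hedged sketch (the username is a placeholder, and the real nomination process links to pre-built web queries rather than API calls):

```bash
# Recent nodejs/node commits authored by a hypothetical candidate.
USER=some-candidate
gh api "repos/nodejs/node/commits?author=$USER&per_page=5" \
  --jq '.[].commit.message | split("\n")[0]'

# How many issues and PRs in the repo they have commented on.
gh api "search/issues?q=repo:nodejs/node+commenter:$USER" --jq '.total_count'
```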
A
For example, if we look at the repo right now, and we look at issues, and we search "nominate" as a keyword, maybe, we can see, you know, like, 100 closed nomination issues for different people. This individual is one of my colleagues, so I don't feel so bad highlighting it. This is for Roy, and Beth says that she'd like to nominate Roy, and this is actually pertinent to the question about the release group, because Roy is also a member of the release group.
A
So Roy was nominated based on current and future contributions to the Node.js release group: he's been regularly attending the fortnightly release mentoring sessions for over six months, and he attends regular release working group meetings. He also helps to maintain npm, which is a dependency within Node.
A
The technical steering committee had already approved Roy to become a releaser, which is part of the release process we'll talk about in a bit, and the nomination shows their commits, their comments, and their org-wide involvement. And so, generally, the way in which this works for a nomination is you mention the collaborators, and if there's no opposition to the nomination within a week, the nomination moves forward.
A
Now, one thing that we do sometimes is there's a step where we go to the collaborators discussion page, which is a private discussion page visible only to the collaborators, to request feedback from collaborators in private before nominating an individual.
A
We
don't
require
this,
but
we
do
encourage
it.
It's
not
a
lot
of
fun
to
nominate
someone
publicly
and
then
have
them
removed.
It's
you
know
it's
bad
for
everyone.
When
that
happens.
To
be
honest,
so
after
the
nomination
passes,
there's
a
tsc
member
who
goes
through
and
works
on.
A
what's the word I'm thinking of, onboarding the individual. And so here you can see we have a full onboarding guide that goes through, like, the whole process of how we nominate people and what permissions we give them. And then, internally, we extensively use IAM within the Node.js org by having a variety of teams in place for handing off permissions. So we don't, like, add people to a repo directly.
A
Specifically, we add people to a collaborators membership group which, underneath it, has a bunch of other sub-teams for various working groups and other subsets. Before I move on, are there any other questions about onboarding for collaborators and the permissions we give?
C
A
Yeah, okay. So we should maybe move on to talking about releasers, because it's only the releasers whose keys we keep on file. There isn't an expectation of, or even a collection of, keys of collaborators; it's only the release team whose keys we maintain. One thing I wanted to touch on really quickly was, like, our charter and governance, before I touch on that specific point, Dan.
A
So the technical steering committee is chartered by the Cross Project Council in the OpenJS Foundation, and they're chartered with a number of responsibilities, including stuff like setting release dates, release quality standards, technical direction, and project governance.
A
The CPC, in turn, is actually chartered by the board of directors of the OpenJS Foundation, with their own list of responsibilities, which includes overseeing the projects. The way in which this structure works actually continues to turtle when we go and look at the Node release working group, which has been chartered by the TSC to own the release process, the content for all releases, the schedule for all releases, and the contribution policy for the release repository. Why this is important is that it speaks to, like, a separation of concerns and a delegation of duty.
A
The TSC, in turn, then charters working groups with that responsibility, so the release working group maintains their own process and their own policy and is able to act autonomously. And the thing that's rather interesting here is that the release working group, in a way, sets the direction of the project, because they decide what goes out in the release, and even the technical steering committee doesn't tell the working group what to do. Now, within the working group's governance,
A
there are things that are documented where they defer to the technical steering committee. So, for example, the working group membership section, which I think starts to get towards your question, Dan, which I promise I will get to, states that the seats are not time-limited, there's no fixed size, and there's no specific set of requirements or qualifications. Beyond these rules, the working group may add additional members to the working group by consensus, defined as no objections and more than 50% of the members participating in the discussion.
A
A working group member may be removed; changes to the membership should be posted to the agenda; and also, no more than one third of the working group may be affiliated with the same employer. Now, this is slightly different from being a releaser, which is, like, a subset: not everyone on the working group is a releaser.
A
So
when
considering
new
releases,
we
email
the
technical
steering
committee
for
approval
after
approval,
the
nominee
will
be
assigned
a
mentor
from
the
release
team
and
walks
them
through
the
process
of
having
learning
how
to
prepare
a
release.
They
work
consistently
with
them.
A
often for many months, before they onboard. And then, before they're fully onboarded to the team, they need to be added to the team; they need to be added to the security release team (and I don't think we're going to get too much into our security process today, but we have a whole distinct set of repos and stuff for managing our security). They have a single high-quality SSH key that's added to the dist user on our server, and their GPG
A
key gets added to the readme: they open a pull request against the documentation to add their key, because it's part of our automations, and we generally wait at least two weeks after that key has been added, to let it propagate, before they start actually signing releases. And what's cool is that, separately from this, we actually have a build working group that maintains the secrets repo.
A
We
can
actually
see,
for
example,
on
our
test
infrastructure,
we
use
the
keys
of
people
who
have
access
to
to
lock
the
secret
so
that
you
know
people
can
get
the
ssh
keys
to
sign
into
our
resources.
Now
not
everyone
from
the
release
team
has
access
to
this,
but
just
another
example
of
how
we
use
gpg
within
within
the
project.
A
So these individuals are the ones that actually manage the releases, and they're onboarded and embedded through an ongoing process. Generally, but not always, the majority of the release team and releasers, if we go back and look at this main repo, work for large companies. So, for example, Beth works for Red Hat; Colin, I believe, works for Joyent; James works for, I'm, god, I'm spacing out on the name of the company, NearForm, which is a really well-respected vendor in this space.
A
Michael, though, is a student; he's on the technical steering committee, and he's a long-term trusted member of the project. There's myself at GitHub; Richard at Red Hat; Rod is another long-term trusted contributor; Ruben is another individual who is independent right now, but a long-time trusted individual; Roy is also at GitHub; and Shelley's at Microsoft. So a good majority of the team do work for large companies, and large companies that are very active in the repo, which makes it much easier to kind of trust.
C
A
But in general, we do run a lot of mentoring sessions. Every other week we run a mentoring session that people can connect to, to ask questions and kind of see how we go through the release process, and then every other Thursday we have a release working group meeting where we talk about our schedule, and people can also come and participate in those.
A
So there are some folks that are not working at very large companies who come and participate over extended periods of time and build trust, and it's kind of at the point where, you know, we feel we really trust those individuals, so we will nominate them for release, at which point, you know, they do also get vetted by the technical steering committee.
B
A
Yeah, the release working group itself says no more than one third. I don't know if we have governance specific to the releasers. Yeah, so the governance doesn't explicitly apply to the releasers, but it applies to the release working group itself, which actually has a slightly larger membership. So there's an LTS group,
A
there's a backporters group, there's a releasers team, and there's a CITGM team, and kind of all of these members come together. Honestly, like, we've reshaped, we've shifted, the membership and setup of this team a couple of times, so that actually is a reasonable thing to point out: we may actually need to go through and kind of rethink the governance there. But at the moment, to the best of my understanding...
C
A
Two, three, four, five, six, seven, eight... so we have ten people. I don't believe there is any company that has four people on the team right now, but yeah, I could see how maybe we're not a hundred percent on that right now. So that's worth digging into, Deb.
B
I was just going to ask how you, I mean, yeah, you kind of answered the question. My question was going to be how you monitor and keep up to date with that, and I guess the answer is it hasn't been a big enough problem to really be worth trying to track.
A
Yeah. So, historically, until I would say the last two and a half or so years, we were significantly understaffed on the release side of things, and it wasn't until more recently that we started getting a lot more people involved.
A
So this is just something that we likely need to go through and review, but it's not been a problem, at least from my perspective, and I definitely have a bias here, but there hasn't been any problem with people on the release team, like, acting in a way that's untoward, or to benefit their organization as a whole. The project tends to be rather good there.
A
The technical steering committee is the place where we've definitely been, like, a lot more on top of ensuring that we don't have a disproportionate number of members from one company, but it's definitely worth reviewing this again. Now, you did ask a final question that I haven't gotten to yet, which is details on the signing process outlined in "verifying binaries."
A
Let me just open that up for you all. So this is git-secure-tag. It was written by Fedor, who's an amazing contributor, a former TSC member, and one of the best experts in crypto that I know. Git only uses SHA-1 hashes when signing, and SHA-1 is generally deprecated, so git-secure-tag runs a cat-file recursively for every entry, sorted alphabetically, entering the submodules of the project.
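As a hedged example of what using the tool looks like (git-secure-tag is a real npm package by Fedor Indutny; the tag name and message are placeholders, and flags may differ across versions):

```bash
# Create a tag whose annotation embeds a recursively computed hash of the
# whole tree, rather than relying on git's SHA-1 object hashes alone.
npm install -g git-secure-tag
git secure-tag v14.13.1 -sm "2020-10-06 Node.js v14.13.1 Release"

# Anyone can later verify the tag against the actual tree contents.
git secure-tag -v v14.13.1
```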
C
A
Those tags get pushed up, and then we have a release script that we run, which is this script right here, which will actually go through and, you know, it might be easier to show you in the markdown, because we've got it documented in detail.
A
It builds all the releases, and it creates the SHASUMS256 file and then signs it using your GPG key; and this, right here, is the script that actually goes through and does all the GPG work for that. And if we go and look at one of the releases on our download page, we can look at, like, our latest release.
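A minimal sketch of that signing step, assuming a directory of built release artifacts; the real release script does considerably more (promotion, multiple artifact formats, both clearsigned and detached signatures):

```bash
# Compute SHA-256 checksums for every artifact, then sign the checksum
# file with the releaser's GPG key.
cd staging/v14.13.1/                 # hypothetical staging directory
sha256sum node-v14.13.1* > SHASUMS256.txt
gpg --default-key "$RELEASER_KEY_ID" --clearsign   SHASUMS256.txt  # -> SHASUMS256.txt.asc
gpg --default-key "$RELEASER_KEY_ID" --detach-sign SHASUMS256.txt  # -> SHASUMS256.txt.sig
```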
A
You can see that we have this SHASUMS256 file, and that has the SHA-256 of every single file, and then we have a PGP-signed version of those SHAs, and then the signature for it. So in here, in our instructions in the readme, we have all of the keys of all the releasers, the commands for grabbing those keys from the SKS key servers, and we have a section on verifying binaries.
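For reference, the verification flow in the README boils down to something like this (the key ID is a placeholder; the README lists each releaser's actual primary key):

```bash
# 1. Import a releaser's key (the README gives one such command per releaser).
gpg --keyserver hkps://keys.openpgp.org --recv-keys <RELEASER_PRIMARY_KEY_ID>

# 2. Fetch the checksum file and its detached signature for a release.
curl -fsSLO https://nodejs.org/dist/v14.13.1/SHASUMS256.txt
curl -fsSLO https://nodejs.org/dist/v14.13.1/SHASUMS256.txt.sig

# 3. Check that the checksum file was signed by a trusted release key...
gpg --verify SHASUMS256.txt.sig SHASUMS256.txt

# 4. ...and that the downloaded tarball matches its checksum.
grep node-v14.13.1.tar.gz SHASUMS256.txt | sha256sum -c -
```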
A
One thing that we have discussed doing recently, but have not done yet, is switching to using a single GPG key that we would keep within our release infrastructure for all of our releases, instead of using keys for all of our individual contributors. We've also received a complaint in the past about the fact that some of our signers use subkeys, but we keep the primary key in the key list here.
A
For me, for example, I have, like, a primary key, and then I have a subkey for signing, which is what I use to sign; but, like, this key that's listed here is my primary key, which will also get you my subkey. So, you know, there is some confusion about this, but overall, across all of our releases,
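To illustrate the primary-key/subkey point: fetching the primary key really does bring its subkeys along, which you can confirm locally (the key ID is a placeholder; the flags are standard GnuPG):

```bash
# Importing a primary key also imports its subkeys.
gpg --keyserver hkps://keys.openpgp.org --recv-keys <PRIMARY_KEY_ID>

# 'pub' is the primary key; 'sub' lines are subkeys, and a usage flag of
# 'S' marks a signing-capable subkey, which may be what actually signed
# SHASUMS256.txt.
gpg --list-keys --with-subkey-fingerprint <PRIMARY_KEY_ID>
```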
A
the tag is signed, and then, for the tarballs that we distribute, every single one of those tarballs includes that manifest, which is signed as well.
B
A
Yeah, so we have CI infrastructure for building the releases, all of those releases. So, okay, actually, this is worth mentioning: we have two different CIs. I will show you one of them, because it's the public one. There's ci.nodejs.org, and this is our main continuous integration server.
A
Our release CI instance is only accessible to the release team and the build team that manage that infrastructure, and it uses build bots that are isolated and only used in the release CI.
A
When we kick off a release job, we have a centralized server that those assets get put onto, into a staging directory. The release team members create individual keys specifically for that server, the public keys are added to it, and then we have the release script to promote our releases from the staging directory to the production directory and create those signed SHAs.
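A hedged, simplified sketch of that promote step; the host, user, and paths are placeholders, and Node's actual release script wraps this in more checks:

```bash
# Hypothetical promotion flow: move assets out of staging, regenerate
# checksums on the server, then sign them locally (the GPG key never
# leaves the releaser's machine; only their SSH key is on the server).
HOST=dist@downloads.example.org
ssh "$HOST" 'mv staging/v14.13.1 release/v14.13.1 &&
             cd release/v14.13.1 && sha256sum * > SHASUMS256.txt'
scp "$HOST":release/v14.13.1/SHASUMS256.txt .
gpg --clearsign SHASUMS256.txt
scp SHASUMS256.txt.asc "$HOST":release/v14.13.1/
```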
C
Can you, all right, in a moment, can you go back? Because I wanted to understand how someone would verify this, but I guess, looking at this right now... I'm sorry, yeah, good, so they've...
C
...from the readme, the signing keys, the public keys. Yes.
B
Well, yeah. This looks similar to, like, the git web of trust that Konstantin showed that the Linux kernel project uses, except this is kind of in the readme; instead, I think there was a special file format for all the keys in Konstantin's version.
E
The problem that we had is that, you mentioned SKS servers, the SKS servers are slowly dying; they're probably not going to survive for much longer. So we had to find some other mechanism of doing this.
E
I have a question. So this ultimately depends on the security of the infrastructure, right? You basically delegate everything to the infra and then trust that the infra has not been compromised, right? Because, yeah, the bit that calculates all the hashes and generates all of these signatures...
A
Yes, and all of that is automated, but, like, it could be done manually the same way, and only the releasers and a subset of our build team have access to that machine.
A
The releaser themselves opens a pull request, adding themselves and adding their key, so that process is not automated. We're all trusting, like, the health of our repo for the long-term management of those keys.
B
So, one thing I noticed in the verifying section: there's this list of keys, and there's a section at the bottom, too, that says some other keys might have been used in the past.
A
That gives you a network of trusted keys, and any one of those keys being compromised could, in theory, break that network of trust. And this is part of the reason why we've looked into, and I'd have to double-check which repo it's in, but this is part of the reason why we've explored switching to just using a single, centralized project key to sign everything, instead of an individual's key.
E
Theoretically, that's why there's expiration. So you do create some keys that expire, right? So, for example, if somebody is in emeritus, then five years later they lose their access to the key, or somebody gains it and re-signs a bogus release with their key; at least, you know, that key or subkey is now expired, or something like that, right? That's it.
A
I know this may sound like a bit of an excuse, but GPG is hard, and it's, like, hard enough at times to just get people to get their systems set up and consistently signing everything, especially since not everyone who is on the release team is an expert with GPG; although I would maybe even claim that very few people on this planet are. So, you know, that definitely has us where it's like: our bar is having a key, not having a particularly complex setup. Yeah.
B
You
know
we're
not
definitely
not
trying
to
point
out
flaws
or
anything
here.
I
guess
my
motivation
for
asking
me
to
come
and
do
presentations,
because
I
think
node
has
done
a
better
job
than
I
think
99.99
of
other
projects,
I've
seen
and
most
people
don't
realize
how
hard
this
is
until
they
see
something
like
a
project
that
has
a
dozen
maintainers
over
many
years.
Try
to
do
this
correctly.
A
So this was an issue that was opened a while ago on a new strategy for managing, sharing, and documenting release keys, and so, like, we do have this kind of ongoing conversation, and a prototype, about how we manage the keys. So there's, like, this prototype repo, where it would actually be, like, a separate repo where we keep all the keys, and I think that there was, I'm trying to remember if...
A
yeah, there are a couple of different threads that we've had over time, and I bring that up more just to, like, point out that we are having ongoing discussions. But I would say it definitely becomes difficult, because there are only so many...
A
there are only so many experts that we have around this in our project. And, I think, this is not unique to Node, and there are definitely other open source projects, like big ones, that are this way: Node is primarily volunteer-led, and while there are some people on the project that, you know, are paid by their employers to work on it, you know, that kind of corporately funded open source is still, you know, not the same as having a staff. So, like, we don't have a roadmap.
A
We have limited resources for what we can do, and I'm often amazed at the infrastructure we have just based on that, but it does make it much harder for larger infrastructural changes. And I would say, and I don't know if this is something that your team is looking at, if there were clear, documented best practices, or off-the-shelf, kind of kitchen-sink utilities that we could use to improve these things,
A
that would be huge. And I think that this goes for releases in general: like, our whole release pipeline is super bespoke and, honestly, due to that, kind of fragile. And I don't even want to talk about the number of times I've had to, like, dive into our bash script and make tiny changes, because there was, like, a weird change to the GPG client's output that, like, breaks our weird script. So it's like, I would love things to be more streamlined.
D
Oh yeah, no, no! I was just going to echo the sentiment from earlier, which is: I've audited a lot of projects, and a majority are not even signing releases, so you're definitely in a good spot.
B
Yeah, I think the other thing I noticed, like you mentioned, is that some of it is Node-specific, but I saw, you know, very, very little Node-specific stuff in what you showed. So I guess it's just this reflection of the sad state of things: that an open source project, just trying to release a binary, has to build all of this project-specific infrastructure, kind of, on GitHub.
A
I would also love our resources to perhaps be hosted more statically, in, like, buckets, as opposed to on an actual server being served by a CDN; there's definitely a lot of room for improvement here. And I would say, separately from this, in my role at GitHub I'm a product manager in the cloud org, primarily focused right now on npm, but also thinking about, like, the holistic, end-to-end developer experience. And one of the things I definitely have an itch to solve,
A
although I have a giant bias around it, is improving some of these things around releases. So, for Node it definitely becomes a challenge, because the platform matrix that we support covers machines that you just can't get in continuous integration anywhere. Like, we build for, like, ARM; we have, like, a cluster of Raspberry Pi 1s sitting in someone's garage in Australia.
A
You know, like, the build matrix that we need is not totally covered. But then, and I don't mean this as a plug for my organization, it's just what I know, like, GitHub Actions is starting to support, like, custom build bots. So something I'm personally interested in is, like, you know, whether it's Tekton or Jenkins or some of the open source software that's out there: what are ways in which we can make this more portable and reusable, and less in need of, like, really custom, handheld infrastructure to keep this kind of stuff up?
E
So the Linux Foundation release engineering team is managing releases for a whole bunch of open source projects that are part of the Linux Foundation (and OpenJS is sort of affiliated with the Linux Foundation, but it's not really managed by them), and there are a lot of things that the team has approached that, I believe, have been solved fairly sanely. So, for example, there are release keys, but they're not available to any member of the team.
E
There's a back-end/front-end process where all the tags, git tags, and all the releases, the tarballs and binaries, are signed using a key that is held on the back-end infrastructure, behind a whole bunch of firewalls, so that that key is not accessible to anybody but the members of this administration team. Even all these release engineers don't have access to that. And so it sounds like you are trying to get to the same place where the members of the release engineering team already are.
E
So maybe that's one of the ways: to talk to those folks. Maybe they can come and present that part, saying, you know: here's how we sign releases using Jenkins, using Gerrit, or using GitHub Actions or Azure. There are a number of ways to do it, all using different infrastructure, but the way the releases are signed and the tags are signed is very similar among the projects. This uses Sigul, which is a tool that's developed by Fedora infrastructure.
E
The actual signing key lives on a system that only connects out to the bridge, right? So they all talk to the bridge. The client says, "Here, I have a binary, please sign it for me," and the request itself is signed by a client certificate, and then we verify that the certificate matches, and then the bridge says to the server that's connected to it: you know, this is the client, I checked their certificate.
E
The client says, "Please sign this binary with the following key," and then the server signs it and says, "Here's the signature," and then that is transmitted back to the agent. So that's kind of how Fedora infrastructure signs all of their packages, and we've adopted this mechanism for all the releases done by release engineering.
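A hedged sketch of what that flow looks like from the client side. Sigul is the real Fedora signing system, but the exact subcommand and arguments below are assumptions based on its documented usage, not something stated in the meeting:

```bash
# The client never sees the signing key: it sends the artifact to the
# bridge, which relays the request to the isolated signing server.
sigul --config-file client.conf sign-data release-key \
      node-v14.13.1.tar.gz --output node-v14.13.1.tar.gz.sig
```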
A
Interesting. Now, is that only for GPG signing, or is there anything in there for notarizing native applications? Because we also, and I didn't really get into it, we have, like, .pkgs and .msis that we distribute that we need to notarize. Is there any infrastructure for that kind
E
of work? So apparently it's only doing GPG stuff, but Fedora has the same need to do this, so we are working with them to get all this stuff in. So signing your MSIs and signing all of your Mac stuff, for example, all the binaries for them, is accomplishable using the same mechanism, so that, you know, you can request: give me this kind of signature for this kind of binary. And there's actually an X.509 certificate used for the actual communication process.
E
So you issue an agent sort of, like, a very short-lived certificate from a CA, saying, you know, this is good for the next three months. If it's stolen, nobody can request...
E
First of all, if it's stolen and we know about it, we can revoke it; and if it's stolen and we don't know about it, at least it's only worth three months, so there's a very limited exposure for it. But the actual key that is doing the signing is never exposed to anybody except, like, the IT administration team, which is, like, 10 people.
A
All right, cool. Yeah, there are ongoing conversations with our build team and the Linux Foundation's IT team, I don't know if it's the exact same people, but, like, we definitely run into, like, scalability and availability issues. So I'm going to ping the folks who are involved in those conversations to make sure that they're touching on this as well. Yeah.
E
If you want more people to present here, then somebody can come, and I can talk to them and say, you know: here's how the Linux Foundation releases are done for, like, 15 or 20 projects, however many we're doing, using various infrastructure, not just Jenkins, but also GitHub Actions or Azure or whatever. But all the binaries are signed in this process that I described.
D
Cool. So, have you ever thought of utilizing some sort of an immutable, append-only record of the artifacts, of the releases that are signed, and by which key?
A
Yeah, absolutely, it absolutely makes sense. We do have something kind of like this, but it's not immutable. We have what I think we call index.tab and index.json, that are available on the download server, that have the information from us on every single release. It includes some metadata, such as, like, whether or not it was a security release, and the versions of all of our, like, embedded dependencies, as well as, like, the specific platforms and files that are available.
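Those index files are real and public; for example, a quick way to pull the metadata for the newest release entry (the jq selection is mine, but the fields shown are ones the file actually carries):

```bash
# index.json on the download server lists every release with its metadata.
curl -fsSL https://nodejs.org/dist/index.json \
  | jq '.[0] | {version, date, security, npm, openssl, files: (.files | length)}'
```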
A
But, I guess, the two major differences between what we're doing and what it sounds like you're suggesting are: one, this file is mutable. It gets appended to every single time, and, like, we have no way of, like, verifying, of, like, confirming, that this file will never be changed over time. So maybe that, that mutability, but that mutability in general, is actually something that we need to talk about as a team, as to, like, how we source our static resources.
A
The other thing is that there is none of the information about, like, who the releaser was, what key it got signed with, or the SHAs. But, I guess, we're relying on, you know, our static hosting not being man-in-the-middled, essentially; sure, yeah, which, you know, only helps so much.
D
Yeah, I see, yeah. Because it's very, very early on, and it's not an OpenSSF effort at the moment, although it could potentially become one, and myself and Dan have been hacking on a prototype which does exactly this, and when we were talking, it struck us as being something that could be well integrated with Node and the signing system that you have. So it's something that we could possibly explore and look at.
D
Not really, no. It's basically a, it's a transparency log, so it's effectively a Merkle tree, and once you record an entry in there, so you take an artifact, you hash it, you associate it with a public key, it goes in and then it's permanently recorded, and then anybody can audit the system. They can monitor the log for entries and then look at things such as the signature, who signed it,
D
What
its
hash
state
is
when
that
happened,
the
actual
time
that
happened
and
it's
a
means
to
sort
of
protect
against,
like
I
said,
targeted
attacks
or
freeze
attacks,
replay
attacks,
that
sort
of
thing
and
and
it's
just
a
good
way
of
monitoring
who
signed
what
and
and
then
not
being
able
to
conceal
or
make
that
information
render
different
to
different
parties
as
part
of
some
sort
of
targeted
attack.
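A minimal sketch of the append-only idea in shell. A real transparency log is a Merkle tree with inclusion and consistency proofs; this simplified hash chain just shows why a rewritten or hidden entry changes every later head:

```bash
# Append an (artifact-hash, signer) record to a log whose head commits to
# the entire history: head_n = sha256(head_{n-1} || record_n).
log_append() {
  local record="$1"
  local prev head
  prev=$(tail -n1 log.txt 2>/dev/null | cut -d' ' -f1)
  head=$(printf '%s %s' "$prev" "$record" | sha256sum | cut -d' ' -f1)
  printf '%s %s\n' "$head" "$record" >> log.txt
}

log_append "$(sha256sum node-v14.13.1.tar.gz | cut -d' ' -f1) signer=<KEY_ID>"
# Auditors re-derive every head from scratch; any altered entry breaks
# the chain from that point forward.
```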
E
How do you check that? How would somebody check to make sure that, you know, it's one thing to have an audit log like this, right, but another thing for somebody to say: well, how do you integrate this into the thing that checks to make sure that this release is actually the newest, or that it hasn't been pulled back, or anything like that?
E
It's committed into the git repository that's then replicated, fast-forward only, right? So if any changes are made to it on the back end, like I want to redo it, or anybody else sneaks in and tries to say, "Oh, this commit didn't actually happen," or anything like that, then those repositories will just stop replicating, and it will be obvious that this is broken. But the problem I always run into is that, yeah, it's great to have it for forensics, but how do we make use of this in the client-side tooling?
B
Yeah. I think the easiest example of an attack that what Luke was describing could mitigate would be something like: if one of those old emeritus releasers in the Node repo had their key compromised, somebody could do a targeted attack, like take a tampered release of Node, sign it with that old key that they found, and then give it to a couple of people, and those people would check it, and it would validate.
B
They could even make it look like one of the newer versioned releases. To use that transparency log, then, in the instructions, or in any client-side tooling that validates a release of Node, you would also have to check to make sure that that signature was in the public record; and you would also need people watching that public record, to make sure that all the entries that show up in there that were signed with their public key were actually signed by them. So it's kind of a way,
B
I guess, just to alert those people that their keys were compromised and used to sign something that they didn't necessarily intend to.
E
Yeah, and this is complicated, too, because, like, a while back there was a tool called rget, right, that would use Let's Encrypt infrastructure, basically, to record a release. But it had a major flaw, I thought, because if somebody wants to create another record of another release with a different hash, right, that would just be two records, and rget would say: yep,
E
this is good. And somebody would actually have to monitor continuously to make sure that there are no duplicate records created. And this is the same problem for all of these blockchain-like systems: you know, you can check to make sure that, is this binary, with this hash, in this record? And the answer would be: yes, it is. But it may not be the correct one; it could be that somebody snuck it in there.
B
Right. So rget worked as a, and there's a new version of it called just transparencylog.com, if you want to find the new version, but rget was basically a map of URL to hash in a transparency log, so for anybody fetching contents from a URL, we can make sure that it was the same hash that went to everybody else that fetched from that URL.
B
I guess, like, some of those Node URLs Myles was showing us, some are kind of designed to be immutable, so they would probably want to know if the contents at, like, a specific path on that Node.js dist server ever changed; but they wouldn't necessarily care about something that was kind of in more of a "latest," or mutable, tag. I think rget suffered from not having a way for the people that maintain and own those URLs to declare whether or not a URL was supposed to be immutable.
A
We have, like, a "latest" tag for each of the release lines and each of our LTS code names, and so for tools like nvm, which is a version manager that a lot of people use to install Node, and which does, to the best of my knowledge, verify the SHASUMS before installing, you could just do, like, "nvm install node", and it will install the latest version of Node based on both that index.tab and that "current" alias that we have. And you can do that for each of the individual release lines, and it will get the latest version of that release line and be able to verify the SHASUMS as well.
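For instance, with real nvm commands (the aliases shown are ones nvm documents):

```bash
nvm install node     # latest release on the "current" line
nvm install --lts    # latest LTS release
nvm install 14       # latest release on the v14.x line
# nvm resolves these against the index on the download server and checks
# the downloaded tarball against its published checksum before installing.
```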
A
I think, for the attack that you were talking about, Dan, where someone's GPG key was compromised: for it to reach our release server, it would likely need to be a combination of both their GPG key getting compromised and also their SSH key, and we rotate the keys on the server so that, like, we make sure that only the active releasers have their keys on there. So, without that SSH key, they wouldn't have access to the machine.
A
It's not perfect, but it's, you know, better than nothing.
B
Yes. I think, just to really complete the story, though, on what you were asking, Konstantin: you would need something like a service that would let you kind of subscribe to that transparency log, even in a tool like rget, to look for certain changes to URLs. Like, if you owned a URL, you could, theoretically, I think Brandon's talked about this, subscribe, and the system would email you, or something like that, any time the content changed.
E
The thing is that there is email fatigue, right, a lot of fatigue, and it all assumes it's automated and that infra is guaranteed to last for the next, you know, five, ten years, which is improbable, you know, even in the best cases, right? All of these fall down at some point. So, yeah, we can create all of these, you know, checks and balances and automation, but unless the end user, who is consuming this tool, knows about them, and knows how to use them, and doesn't forget, or just doesn't care to verify them,
E
that, in the end, does not actually get us any more security. To what I'm saying: it is difficult enough that, for example, and this stays in here, but three years ago I asked Fedora infrastructure: do you actually verify the kernels that you download from kernel.org? And the answer was: we don't, actually. You know, we kind of trust that the HTTPS process is sufficiently secure, and the infrastructure on both sides is sufficiently secure, so that we don't have to do it.
E
We're talking, like, a major distribution, right, that is not doing this. What hope do we have for just general end users doing this, right? There are the programs like MSI signing, and what Apple does on their side (I'm not sure, I don't use it), and this is better, but it also is incredible gatekeeping, right? So this is, we are...
E
We
are
trading
off
the
freedoms
of
open
source
and
and
liberty,
and
what
what
not
to
you
know
having
this
tremendous
gatekeepers
in
place,
saying
you
know
you're
not
allowed
to
release
this
binary
and
have
it
run
on
our
system
versus
versus
the
almost
guarantee
that
nobody's
going
to
check
that
this
is
a
valid
binary.
So
we
kind
of
between
these
two
anvils.
B
Yeah, I guess the one example of something working out like this recently would be the Go modules transparency-log effort, where they kind of baked the validation into the "go get" command by default in all new versions of Go. So everybody installing dependencies is kind of acting as a check without maybe even realizing it. If something bad does happen, it's sort of like SSH blowing up and saying "hey": like when an SSH server-side key gets changed, it says, this might not be what you want.
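A small illustration of that default; these are real go commands and the real public checksum database, included here only as a point of comparison:

```bash
go env GOSUMDB                     # -> sum.golang.org, the default checksum database
go get golang.org/x/text@v0.3.3
# go verifies the module hash against go.sum and the transparency log;
# a mismatch aborts the download with a security error.
```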
B
Right, we are going way past our intended time now. Yeah, we're way down. Yeah, I guess we have one minute left and people are starting to drop. Any other questions, or just: thanks a lot for presenting today, Myles.