From YouTube: Jon Ringer - The Architecture and History of Nixpkgs (SoN2022 - public lecture series)
Description
In this lecture, Jon will talk to us about the architecture and history of Nixpkgs. Jon will cover some unique aspects of Nixpkgs, the CI/CD process, a quick how-to on contributing to Nixpkgs, and more.
Special thanks to the NLnet foundation, the European Commission, the NixOS Foundation, and Tweag for making this event a reality!
The continued discussion for the lecture series is happening over here:
https://matrix.to/#/#son2022-lectures:matrix.org
More information about the Summer of Nix can be found on the website:
https://summer.nixos.org
A: And we're live. Hello everyone, welcome to yet another installment in the Summer of Nix public... no, sorry, the Summer of Nix 2022 public lecture series. There we go. Today we'll have Jon Ringer talk about Nixpkgs, and a lot of things around Nixpkgs. I hope you are by now familiar with the format of the lecture series, meaning that for continued discussion after, or even during, the talk, there's the Matrix channel, so head over there.
A: If you have any questions, also do not be afraid to post them on the Owncast instance, YouTube, LinkedIn, or Twitter. And again, I want to make sure to thank the people that made all of this happen: the NLnet Foundation, the European Commission, the NixOS Foundation, and Tweag, of course. So I hope you all have a wonderful lecture, and I do hope you'll enjoy it. I will now give the stage to Jon, so please all welcome Jon, and let's see what he has to say.
B: Thanks, Brian. Yes, my name's Jonathan Ringer, and today I'm going to talk about Nixpkgs. Within this talk I'll cover: what is Nixpkgs; a brief history of Nixpkgs; some of the unique qualities of Nixpkgs as compared to a lot of other package repositories, and, in that vein, the many ways in which Nixpkgs can be leveraged, even in non-traditional domains; the release process that we have for Nixpkgs; a little bit about maintainership; and then also how to contribute.
B: So, to begin this talk: why should you care about what I have to say? I've been programming for almost 10 years now. I first stumbled upon Nix around 2017 through a Haskell subreddit, began trying it in early 2019, made my first contribution in May that year, and then got my commit bit in September. I became addicted to using Nix.
B: I got really enthralled with its design and its elegance, and to learn how to use Nix I just started reviewing PRs. That's how I got started with the community.
B: The next year I became the release manager: the 20.09 release, which was with worldofpeace; then 21.05 solo; and then 21.11 with Tim DeHerrera and Tom Barrett. I'm still involved today; I'm currently at over 70,000 contributions and going strong. This isn't to toot my own horn; it's more to say that in the past three and a half years Nix has been a huge part of my life, and I'm really happy to contribute and be part of the community.
B: So then, let's start with a brief history of Nix. Nix was started as a PhD thesis by Eelco Dolstra; this began in 2003. Originally it was an SVN code repository, and eventually it moved to Git. I couldn't find a year for that, but the community then moved to GitHub in 2012. NixOS used to be a separate code repository altogether that lived outside of Nixpkgs, but in 2013 it was merged, and that's now the nixos directory inside of the repository.
B: We also had the first release of NixOS that year. And then, at least from my own personal observations (this is a rough estimate), it seems like activity on Nixpkgs has been about doubling every two years. That's a huge growth rate, and I'm really excited to see it continue.
B: But what is Nixpkgs? These are the figures that we would normally cite when we're touting, or shilling, Nixpkgs, and it really does have all these features. It's a massive package repository that's all source based: we have roughly 60 to 80 thousand packages, depending on how you count. It's also the home of NixOS, and the amount of activity on it is very impressive.
B: We have about 100 merged PRs a day, about 100 commits, and also around 10 issues a day. Extend that out to a month and you see it's very, very active; it's a very vibrant community.
B: But qualitatively, and this is what really excites me about Nixpkgs, it is so much more than just a repository of packages. It's also multi-platform, so you can run it on any distro.
B: It doesn't matter; you can run it on macOS, and there's some experimental support for other platforms as well, like FreeBSD, but your mileage will vary a lot. As a personal user, I really like that.
B: Nixpkgs isn't just a static collection of configurations and state. Think of something like apt or Debian, which is essentially a metadata repository: there are artifacts that you eventually fetch, and the creation of those artifacts is very separate from their maintenance. As a user of Nixpkgs, I'm able to use things like overlays and overriding to shape that landscape into something that I actually want to consume, and then extend it forward for my use cases.
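As a sketch of what that shaping looks like, here is a minimal overlay; the package name, pname, and patch file are illustrative, not from the talk:

```nix
# An overlay is a function of two package sets: `final` is the set
# after all overlays are applied, `prev` is the set before this one.
final: prev: {
  # Reshape an existing package instead of editing Nixpkgs itself,
  # e.g. swap in a patched build of GNU Hello.
  hello = prev.hello.overrideAttrs (old: {
    pname = "hello-patched";
    patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
  });
}
```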
B: One way that I like to conceptualize this is that each one of these packages is like a small island of context and included software. So if I use the Firefox package from Nixpkgs, everything that it needs at runtime can be bundled with it.
B: It's this nice, minimal set, what we call a closure, that's needed to actually run that software. And the last bit is that Nixpkgs is just a body of code. There's nothing super special about it; it's highly regular, it's just Nix code, and that has all the benefits of code: you can version control it, you can iterate on it, you can do PR workflows, you can do CI workflows. It's very, very beautiful. So how is Nix able to achieve this uniqueness?
B: Well, it's in how we abstract over a package. Technically we have three different types of derivations, but the most common one that you'll see is called an input-addressed derivation, which means these packages are unique based upon the sources that are included in them, the build-time and runtime dependencies that they have, the build steps, and the architecture and platform. All of this gets factored into a very unique package that's separate from everything else.
B: So there's a way to uniquely identify a package exactly how you intended to use it. Down at the bottom I have a quick little example: if you remove an optional dependency, that gets reflected in how Nix will address the package.
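The idea behind that example can be sketched outside of Nix: treat a package's identity as a hash over everything that can influence its build. This toy Python model is not Nix's actual hashing scheme, and the package and dependency names are made up; it only illustrates the input-addressing principle:

```python
import hashlib
import json

def drv_address(name, srcs, deps, steps, platform):
    # Toy input addressing: hash every input that can influence
    # the build output into a single store-path-like identity.
    payload = json.dumps(
        {"name": name, "srcs": sorted(srcs), "deps": sorted(deps),
         "steps": steps, "platform": platform},
        sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}"

full = drv_address("ffmpeg", ["ffmpeg-6.0.tar.xz"],
                   ["gcc", "x264"], ["configure", "make"], "x86_64-linux")
# Removing the optional x264 dependency yields a different address:
slim = drv_address("ffmpeg", ["ffmpeg-6.0.tar.xz"],
                   ["gcc"], ["configure", "make"], "x86_64-linux")
print(full != slim)  # True
```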
B: If you're familiar with the concept of Merkle trees, you can think of the hash there as something like a Merkle tree hash, and that would be a relatively correct intuition. Now, I mentioned earlier that Nixpkgs is source based, but we also have this way to uniquely identify things, and so Nix lives on both ends of the spectrum between source and binary distribution in a package manager.
B: On one end you have something like Gentoo. They do have support for downloading some binaries now, so it's not completely source based if you opt into the binary workflow, but historically people associate Gentoo with building everything on your own machine. You use some USE flags, so you have a way to compose an environment that suits your needs better. The thing is, though, it's incredibly slow to build everything, and so a lot of the time package managers like apt and RPM come out ahead.
B: They get a leg up because, as a user experience, it's really fast to just download pre-built binaries; that's a good user experience. And then recently, well, "recently" as in 10 years ago, Docker came onto the playing field, and now you have this notion of an environment plus an application.
B: Then you have other technologies built on that containerization technology, plus some metadata, to give a package-like experience; that would be things like Flatpak and Snaps. But Nix doesn't cleanly fit anywhere on that spectrum. On the one hand, you can just disable substitutes altogether, and now you have the Gentoo workflow.
B: On the other hand, with substitutes enabled, you get the binary distribution experience. We essentially don't have to worry about stepping on other packages, and it also enables all these other beautiful things that fall out of being able to uniquely identify a given package. And how does Nix go about creating these very unique packages? It's in two steps, and everything that Nix ever builds follows these two steps. The first is instantiation.
B: If you ever look up how to write a package, generally they'll call it a Nix expression. Instantiation takes that code, that expression, and creates an intermediate build recipe, which we call a derivation, where things like your platform, architecture, and all of your dependencies get resolved. After instantiation, Nix has a very clear-cut way to build exactly whatever piece of software you want. The second step is realization.
B: This is the actual build part as we think of it in a traditional sense: we go from that build recipe to the real store path, the package that you want to consume. One thing to note that's unique about Nixpkgs (maybe not unique anymore, with things like Silverblue) is that all these paths are read-only after building, and what that enables is something called maximal sharing.
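The two steps can be seen directly with the classic CLI; this is a sketch with placeholder hashes, assuming a default.nix in the current directory:

```shell
# 1. Instantiation: evaluate the Nix expression into a .drv file,
#    a fully resolved build recipe (sources, dependencies, platform).
nix-instantiate default.nix
# -> /nix/store/<hash>-hello.drv

# 2. Realization: run the build described by that recipe, producing
#    the read-only store path.
nix-store --realise /nix/store/<hash>-hello.drv
# -> /nix/store/<hash>-hello
```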
B: So if you have package A and package B, and they both rely on the exact same derivation of package C, they can freely share it without having to worry about the other ever mutating it. We get pretty much 100% dependency reuse across the landscape, even when two packages just happen to use the same thing by accident; it organically falls out of being able to uniquely identify everything.
B: But where does this all get stored? In a traditional operating system, you would have things like the bin directory, the lib directory, and the usr directory, and all of these resources, dependencies, binaries, and libraries would have to cohabitate with each other; you could really only have one version of each. What Nix does is separate those concerns, so you can think of packages as existing independently of what is on your system.
B: The nice thing is that you don't have to get stuck in some invalid state. What I mean by that is: let's say you updated OpenSSL and broke a bunch of stuff that you only notice later, when you go to run it; I would consider your system to be in an invalid state. In Nixpkgs, trying to do that update redefines the entire history of how to build something, and then Nix will try to build something new.
B: The build and development environments that you use with Nix contain only what they need. For example, we have a very common tool with Nixpkgs called nix-shell. You can just instantiate a nix-shell, and it will modify your PATH and certain environment variables to communicate what should be present in there, but in the end what you get is exactly what you needed. Once you leave, it's like it never existed; it does exist in the Nix store, it's just not exposed in any way.
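A minimal sketch of that workflow; the package names are Nixpkgs attributes, and the exact versions depend on your channel:

```shell
# Enter a throwaway environment containing only what you asked for:
nix-shell -p go gopls
# [nix-shell]$ go version     # go is now on PATH
# [nix-shell]$ exit           # leaving restores your previous shell
```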
B: Conceptually, this is one thing that I think is very different about Nixpkgs as well. Many people, when they come from a more traditional RPM or apt environment, always want to ask "how do I install something?", and I think that's the wrong way to view Nix. What you really want to ask is: how do I expose the packages and pinned dependencies in the environment in which I want them? For NixOS, that would be your system.
B: How do I expose it at the system level? If I'm doing development, how do I expose it in my development workflow? And if I'm building a Nix package, how do I expose it in the build environment? Those are all separate concerns, and that's the main perspective difference, I think. So then, how do we extend Nixpkgs beyond just packages, those little islands of software I mentioned earlier?
B: Well, it's really nice to have some way to combine them into something bigger than the sum of their parts, and in this way NixOS kind of inverts the logic. Whereas in Nixpkgs we're only concerned with the minimal set of dependencies needed to run a piece of software, NixOS is still asking "what's the minimal thing to achieve this goal", but the question becomes:
B: How can I combine and compose all of this software and configuration into something greater? I mentioned instantiation and realization; it's somewhat arbitrary that we mostly use them for software packages. You can also build things like configuration files or systemd units, so you're able to freely use that derivation model to build up larger abstractions, and in this case that would be services.
B: For instance, with services.postgresql I can build up all my database configuration alongside the actual software that I'm going to use, along with any other runtime dependencies that I would also need. One interesting aspect of NixOS modules is that we use fixed-point logic to deduce the final state of the system, so from within a module you're able to inspect the entirety of the system as it would be.
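A sketch of what that looks like in a NixOS configuration; the option names follow NixOS's services.postgresql module, and the values are illustrative:

```nix
{ pkgs, ... }: {
  services.postgresql = {
    enable = true;
    # Pin the exact server package alongside the configuration:
    package = pkgs.postgresql_14;
    ensureDatabases = [ "myapp" ];
  };
}
```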
B: That is, you can consume the system in its final state, while from the specific module you can just focus on your own module's logic. What I mean by that is: the postgresql module is only concerned with setting up PostgreSQL services, yet it can still view everything else.
B: If you've ever used things like mkForce, mkOrder, or mkMerge, you have all these nice little primitives to override the default behavior that NixOS is opinionated about. NixOS will give you defaults, saying "this is what I think it should be", but if you're more opinionated, for example on a hardened system, you can just say: my open TCP ports are mkForce'd to only SSH, or maybe just OpenSSH on a different port.
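A sketch of that hardened-system override, as a minimal NixOS module; the port choice is illustrative:

```nix
{ lib, ... }: {
  # Other modules may contribute their own open ports; lib.mkForce
  # discards those merged defaults and keeps exactly this list.
  networking.firewall.allowedTCPPorts = lib.mkForce [ 22 ];
  # Related primitives: lib.mkDefault (a weaker default) and
  # lib.mkOrder (control over merge ordering).
}
```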
B: One thing this also allows for, and this is probably the most recognized aspect of NixOS, is that you're able to define your system within a single configuration file. On the right-hand side I have a very minimal amount of declarative syntax that says: I'd like OpenSSH to be enabled, and I also want Docker enabled.
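Roughly the kind of snippet being described; this is a sketch, since the slide itself isn't reproduced here:

```nix
# configuration.nix: the whole system intent in one declarative file.
{ ... }: {
  services.openssh.enable = true;
  virtualisation.docker.enable = true;
}
```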
B: From there, all the other modules that may be concerned about those options are able to react accordingly, and the OpenSSH module itself is able to do things like ensure that an sshd daemon is running and that the ports are open. It's very beautiful. The other thing, as I mentioned earlier, is that with Nix there's a divide between the system and the packages, and what that allows for is something called generations.
B: Each time we build a system configuration, we call that a generation, and because you're only manifesting that environment when you need it, you can go back and forth through history. We call this rolling back: you're able to select points on this timeline of modifications that you've made, and that's a very unique aspect as well.
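In CLI terms, that workflow looks roughly like this sketch:

```shell
# Build and activate a new generation of the system:
sudo nixos-rebuild switch
# List the timeline of system generations:
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
# Step back to the previous generation:
sudo nixos-rebuild switch --rollback
```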
B: A lot of people today will do something like Btrfs snapshots, but then you have to capture everything about your system, and that can be very, very large. With Nix you get this beautiful, minimal configuration file, and the only thing you really want to snapshot is your persistent data.
B: Home Manager I would describe as something like NixOS-lite, or NixOS for the user. What this means is that you can think of it as managing your dotfiles and your user programs. A lot of people have these very intensive install scripts along the lines of "if I'm not in this state, then get to that state by doing these actions"; that's reconciliation-style configuration management. Home Manager, by contrast, is the congruent model.
B: What I describe on the input side is what I'm going to get on the output side. The other nice thing about Home Manager is that, because it has no notion of system services or hardware, it can also be used beyond just NixOS. For example, I use my Home Manager configuration on macOS.
B: I used to use it for WSL, and across my three other machines, so I essentially have the same home across multiple devices; it's very beautiful. On the right-hand side I have an example: I'm very particular about my Git configuration, and I'm able to encode that in a nice, declarative, minimal syntax.
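A sketch of such a Home Manager Git module; the option names come from Home Manager's programs.git, and the identity values are placeholders:

```nix
{
  programs.git = {
    enable = true;
    userName = "Jane Doe";
    userEmail = "jane@example.com";
    # Arbitrary gitconfig keys can be encoded declaratively:
    extraConfig.pull.rebase = true;
  };
}
```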
B: Can we extend this model further, though? Within Nixpkgs we also have NixOS tests, which are multi-node, multi-service workflows: you bring up independent actors and then assert some workflow across them. Now, try doing that in a traditional context. Let's say you're a SaaS provider and you want to spin up a scenario where you bring up your service and then some user workflow.
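The shape of such a test, as a sketch; the exact entry point varies across Nixpkgs versions, and the node and service names here are illustrative:

```nix
{
  name = "web-smoke";
  nodes.server = { ... }: {
    services.nginx.enable = true;
    networking.firewall.allowedTCPPorts = [ 80 ];
  };
  nodes.client = { ... }: { };
  # The test driver boots both VMs and runs this Python script:
  testScript = ''
    start_all()
    server.wait_for_unit("nginx.service")
    client.wait_until_succeeds("curl -sf http://server/")
  '';
}
```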
B: It's actually very difficult to achieve that; generally it's very much hacked together, you hope that things are in place, try to run the workflow, and hope to god that your test cases match the behavior that was intended. So yeah, NixOS tests are very beautiful. Then, on the remote side: if you're trying to reason about multiple machines, there are many things available today.
B: NixOps is the one within the Nix ecosystem itself, and then there's deploy-rs, I think (don't quote me on that), and Bitte from IOG. These are used in scenarios where you're talking about many services and you want that high-availability or microservice type of model.
B: And then my personal favorite about Nix is the nix-shell tool. If you need to try a package, say you need a different Go compiler version, you can just nix-shell into that environment and use it, and when you're done you get out of the shell, and once you're out it's exactly as if it never existed. If people are familiar with Python's virtual environments, I think of it as a virtualenv, but for your system. You can do it for native dependencies, build dependencies, libraries, whatever: bring them in, use them, discard them, and move on with your life, without acquiring all the cumulative debt of doing apt install this, homebrew install that. So that's how Nix is leveraged.
B: But how does it actually work underneath the covers? Within Nixpkgs there's all this work to be done; I mentioned all that activity earlier, but how do people actually receive it? Development on Nix and Nixpkgs is just a code-related workflow: it's a GitHub repository, we do PRs, we do issues, and so on. What that manifests into is this:
B: If I want to update an individual package, that's just a PR sent up there; we'll review it, hopefully, and then merge it. This also includes things like fixes, security patches, enhancements to existing services and modules, and backporting from the development branch, which is master, to the current release branch. But that's just how we collect all of that activity. The thing that actually distributes it to end users, for binary downloads, is Hydra.
B: Hydra is a very Nix-specific CI/CD tool. Since Nix is so different from other, traditional package managers and CI/CD workflows, Hydra is the one that understands the Nix workflow and is able to leverage that to its benefit.
B: The official Hydra instance is hydra.nixos.org, and the number of builds that it does is pretty astounding. For trunk (the name is a holdover from the SVN days), I think it's building something like 140,000 packages each time we have an update, each time it pulls from master. It's pretty amazing how much it's able to do. And the release workflow:
B: If you've ever tried to install Nixpkgs, or use nix-channel, or use flakes, generally people say to use nixos-unstable if you want unstable, or nixos-22.05, which would be the latest stable. Those are release channels, and Hydra is actually the one pushing those updates. Once Hydra pulls master, there's also a description of what to build, something like a NixOS release.nix, and in there it says: if all of these things pass, all these continuous-integration gates, then we're able to go forward with advancing the release channel and release that to the public. And that's it, actually.
B: The other interesting thing about Hydra is that updating the cache is actually done asynchronously from the release channels. What I mean by that is: any time Hydra has a successful build of a package, regardless of how it came to be, it will automatically be uploaded to the cache. This model works well because, as I mentioned, we can uniquely describe how packages exist.
B: If you want to consume from that cache, you just need to describe the exact package that you want, so the cache can be thought of as a key-value store: if the path exists, give it to me; if it doesn't, Nix walks down the dependency tree to see what you do and don't have, and builds the rest.
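That key-value behavior can be modeled in a few lines of Python; this is a toy, and real caches serve compressed NAR archives keyed by store paths:

```python
# Toy binary cache: store paths are the unique, input-addressed keys.
cache = {
    "/nix/store/abc123-hello-2.12": "<hello NAR>",
}

def fetch_or_build(path, build):
    """Substitute from the cache when the exact path exists,
    otherwise fall back to building locally."""
    if path in cache:
        return cache[path]
    return build(path)

hit = fetch_or_build("/nix/store/abc123-hello-2.12",
                     build=lambda p: "<built locally>")
miss = fetch_or_build("/nix/store/def456-newpkg-1.0",
                      build=lambda p: "<built locally>")
print(hit)   # <hello NAR>
print(miss)  # <built locally>
```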
And next: ways to contribute to Nixpkgs. There are a lot of ways to contribute.
B: The scope of Nixpkgs is essentially all software in existence, so there's a lot of domain in there; we have a lot of packages, and a lot of packages have a lot of different workflows. So yes, if you're using Nixpkgs, please submit issues. What often happens is that someone packages something, it gets updated, the person who first packaged it no longer uses it daily, and some update invalidated a workflow without us being aware of it, because it builds fine and it looks fine.
B: If you're more comfortable with Nixpkgs and writing Nix, then please, please do pull requests: fixing broken builds, fixing certain use cases for your software, or doing updates. That's a great way as well. Nixpkgs is in this odd state that a lot of large FOSS projects reach once they hit a certain volume of contributions.
B: It's really hard to find reviewers for all that work. The Linux kernel is in a similar state, where it's really hard to find maintainers who can review the patch workload, and Nixpkgs likewise finds it hard to muster enough people power to review everything. But maybe you don't want to contribute just code.
B: Documentation helps too. A common question is something like "how do I write a unit file?", and it's really hard to find what you should do. I think Nix is in a similar state, where we can work on making these workflows something people are more likely to actually consume. In that vein, the NixOS Wiki generally does a good job of doing deep dives on specific topics. We do have Nix Pills, but those are very technical.
B: Nix Pills describe how stdenv.mkDerivation works, which I think is good if you're going to make a long-term investment in Nix, but we don't really have anything like a nice "Rust book" equivalent. And the last thing you can do is become a maintainer. Maintainer, I think, is a nice compromise between all of these.
B: What does it mean to become a maintainer? Today, largely what it means is that for the packages you maintain, you get notifications whenever they get updated or changed by people. You'll get a little GitHub notification, you can go there, you can review the work; we document how you should review a package. It's immensely beneficial, even if you don't have commit rights, just to be able to say: hey, I'm the maintainer on this package.
B: There's some work on it, I reviewed it, I certified that at least in some normal workflow it works. If I'm a committer and I come across that, it's a huge win for me: yes, thank you so much, I don't have to be as worried about the correctness of the package when someone who's very familiar with it says that it's correct. And with that, I think I'll take questions now.
A: Yes, thank you very much, Jon. I will also be pulling Matthias in here, since he has been posting a lot of questions on Matrix as well. There we go, I'll do it like that. So welcome, Matthias.
B: Do I feel the absence of static types? I would say, on average, no, but when I do miss them, it hurts badly. If I'm writing a list of something and I forget the little parentheses around some function call, then this item is a function, not a set; it can bite. I would really like to see static typing within Nixpkgs, and the workflow with Nickel kind of gives me hope there.
A: That makes sense. I think it's best if I let Matthias ask his own questions.
C: Thanks, Jon, for the great talk. I have a few questions. Number one: Nixpkgs is this giant monorepo, and now there's also this approach, with flakes, of having every package in its own repo, basically, and then a search engine could potentially aggregate those different repos.
C: Between those extreme ends, do you also see space for, let's say, having something like two Nixpkgs?
B: The monorepo is also causing issues, but being able to have a holistic view of everything at once is really enabling for cross-package issues. If we do go to the flake model, I think it might work for something like the Python package set, since a lot of those packages don't much care about native dependencies on the whole, with the exception, maybe, of machine learning and linear algebra.
B: But the thing is that software isn't meant to live in a vacuum. Especially in the FOSS community, there are so many different libraries and pieces of software that I think it would be really hard to actually shard that model into something more distributed.
B: I do think it can work; I just think it would be really difficult in practice, and the maintenance burden of tracking down where regressions happen would be a lot higher. One thing that I do like in Nixpkgs is that you can just do a git bisect on something: if it breaks in staging, you can eventually use the Git tooling to find where that workflow got invalidated or died.
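Roughly, that bisect workflow on a Nixpkgs checkout looks like this sketch; the attribute name and good commit are placeholders:

```shell
git bisect start
git bisect bad HEAD
git bisect good <last-known-good-commit>
# At each step, rebuild only the affected package, then mark it:
nix-build -A somePackage && git bisect good || git bisect bad
```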
C
Okay
and
then
I
have
another
question
so
on
mixed
packages
also
has
flake's
support
already,
and
I
think
many
packages,
at
least
for
me,
work.
What's
the
state
of
this?
Do
you
know
whether
there
are
things
left
to
be
done
to
make
that
fully
competitive
flakes
compatible.
B: Are you talking about consuming flakes from Nixpkgs? I was just about to phrase that question the same exact way, but I mean two different things: do you mean Nixpkgs being able to use other flakes, or do you mean just making the flake workflow the standard?
C: Yeah, I basically just mean being able to type "nix run nixpkgs#" and then whatever package; you know, it works with the flakes command line.
B: Yeah, I would defer to Eelco there, and I sympathize with his desire to keep it experimental, so that, as we incur pain with the CLI and its ergonomics, we can change trajectory without breaking the world. That, I think, is still very powerful for someone who just doesn't want to have to do a Nix 4.0, essentially.
B: That I sympathize with, and that's all I'm going to say on the matter. I've been using flakes for a year and a half and I've had relatively few issues; for 99% of the workflows, the command-line experience has been the same.
C: Okay, and then I have the last one. When we add tests to Nixpkgs, how far would you typically go with tests? Would you just test all you can, or do you think we shouldn't add too many tests, because then we have to maintain and change these tests as well? Do you have an opinion on this?
B: Yeah, I do, actually. I would say that the correctness of the software, that burden of ownership, is on the upstream; it's the upstream's responsibility to make sure that it works. Our responsibility is just to make sure that how we package it works. So I would say tests should be minimal and complete: workflows that push the boundaries of what it's supposed to do, but not some exhaustive, massive test suite.
B: It is really annoying when you run a Python machine-learning library and it does its full test suite, which is massively compute, memory, and storage heavy. So there's definitely a compromise there. We should make sure that it's more or less correct in the context of Nix; that's what we should be asserting and optimizing for.
A: Okay, nice. There's also a question over on YouTube from Mihai Fuvezon, if I'm pronouncing that correctly: how would you envision a Nixpkgs transition from Nix to Nickel?
B: If I do see it rolled out, it's one of two things. If the syntax is not compatible, then we'll have to do a full rewrite, which would be very painful. Or, if it's something like Python, where they have gradual typing and you can add new syntax on top of the existing syntax, then you can slowly introduce that syntax over time, and I think that would be the preferable route.
B: But there are always pros and cons with this. Maybe the really nice, ergonomic syntax is just not compatible with the current parser, or, what's the word, the thing you parse, the grammar, there we go; yeah, maybe it's not compatible with that grammar.
A
Okay. We have new intel, by the way: it's pronounced "Mihai", so both me and Matthias were wrong on the name last time. But they asked: how do multiple versions of packages work at all?
B
How do multiple versions of a package work at all? Nix has the separation of build and runtime dependencies, and it works as long as you're never in an environment where those two things need to be coherent. I'll give one example: hardware acceleration. If two things both need to reference your video drivers, that would be one instance where they need to link against something outside of just their own closure. But the average CLI application or service, anything that doesn't need something like hardware acceleration, is fine.
B
And if you have something that you compile, then we have other levers like RUNPATH and RPATH, which we can set to very Nix-specific locations. So, to say it in short: Nix allows you to express incompatible packages, or different versions, side by side. The caveat is the search path: if you put Python 3.9 and 3.10 on the same PATH, then it's only going to pick one.
B
So as long as you're not in a scenario like that, you can use multiple versions as you wish; and if you refer to them by their Nix store path, then you can always use them regardless.
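As an illustration of that point (mine, not from the talk), two Python versions can be built from one Nixpkgs checkout and used side by side, as long as they are never merged onto the same PATH:

```nix
# Sketch: two self-contained environments, each carrying its own Python
# closure. They never conflict on disk; only putting both bin/ dirs on
# one PATH would make the shell pick a single `python3`.
let
  pkgs = import <nixpkgs> { };
in {
  # build with `nix-build -A envA` or `nix-build -A envB`
  envA = pkgs.python39.withPackages (ps: [ ps.requests ]);
  envB = pkgs.python310.withPackages (ps: [ ps.requests ]);
}
```

Each result symlink points at a distinct store path, which is why referring to a version by its store path always works.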
A
Okay, thank you, John. Another question over in Matrix: Akanji asked, what is the major feature that Hydra has that other CI/CDs don't support?
B
So, for example, if we want to update nixos-unstable, what happens is that every six hours Hydra goes and pulls the master branch and evaluates the release expression, release.nix. Just evaluating that logic then expands out to all of the packages that are free, for every platform, and Hydra is then able to distribute that work across the build machines. I would say that's the major feature.
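For intuition, a Hydra jobset is just a Nix expression whose attributes are derivations; a drastically simplified sketch of the shape of such an expression (the real `nixos/release.nix` is far larger) might look like:

```nix
# Toy jobset expression of the kind Hydra evaluates: every attribute
# becomes a job, and Hydra schedules each job onto its build machines.
{ nixpkgs ? <nixpkgs> }:

let
  pkgs = import nixpkgs { system = "x86_64-linux"; };
in {
  hello = pkgs.hello;
  jq = pkgs.jq;
  # ...the real jobset expands to every free package, on every platform
}
```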
A
Okay, cool. Then Mihai again: why are there occasionally failures on Hydra, and how can those be eliminated or reduced?
B
Yeah, so I would say that the bane of most builds right now is the staging workflow. Nix has this double-edged sword of being aware of everything that a build needs. That's great when everything is already built, it's fantastic; but say you want to update the version of glibc: well, pretty much everything has an opinion about which version of glibc it's using.
B
So if you do modify that, then you're essentially rebuilding the world, and that is a very computationally expensive task. Hopefully content-addressed (CA) derivations will mitigate that heavily, but in its current form rebuilding the world takes very long, and we don't expect anyone to do that on their personal machines. So we have the staging workflow, where you get all these risky updates compiled together, and what happens is that we just do a prioritization of them.
B
What happens, generally, is that there are these packages where the upstream isn't super active about adopting new dependencies, and the Nixpkgs package ends up in this weird state where the assumptions about what it needed have changed, without the manpower to actually solve it. The way we remedy this on the Nixpkgs side is that twice a year, in anticipation of a release, we do something called Zero Hydra Failures, and the community makes a good effort at going through that backlog and remedying all of those build failures.
A
Then Colin Arnett, also on Matrix, asks: how will you improve the experience of Python developers?
B
Yeah, ONNX Runtime. So ONNX Runtime is a machine-learning neural-net framework that's done by Microsoft and a few other partners; anyway, it doesn't matter. I think it's just a few things. Python is uniquely painful, I'm just going to use that word, because the Python packages need to be coherent. I mentioned earlier that you can have multiple versions of things; Python is one of those exceptions, because of how the Python interpreter works.
B
If you do `import numpy`, it can only import the first numpy that it encounters, and so that's one thing: you need to force a coherent environment. This is also why, if you try to pin different versions in a PR, we're going to push back. We don't mean to hate on you; it's just that Python's complexity percolates up to us. And then another thing is machine learning.
B
That's just not allowed with Nixpkgs packages, because it's going to limit the amount of hardware that can actually run that software. So that's another thing against it. And then, lastly, there's ONNX Runtime in particular.
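The coherence requirement described above is why Nixpkgs composes Python environments as one set rather than mixing interpreters; a small sketch (mine, not from the talk):

```nix
# Sketch: withPackages builds ONE consistent site-packages view for the
# interpreter, so `import numpy` can only ever resolve to a single numpy
# that was built against this exact interpreter and dependency set.
let
  pkgs = import <nixpkgs> { };
in
pkgs.python310.withPackages (ps: [
  ps.numpy
  ps.scipy # must come from the same coherent package set as numpy
])
```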
B
It's
just
that
the
upstream
and
not
not
it
not
to
hate
on
microsoft,
because
a
lot
of
those
people
are
working
hard,
but
they
do
like
a
lot
of
unadmitted
on
on
idiomatic
practices
in
regards
to
git
cmake
and
a
few
other
things
where
it
makes
it
really
painful
to
actually
maintain
that,
and
what
I
mean
by
that
is
that
the
cmake
workflow
is
essentially
like
get
module
based,
and
it
doesn't
try
to
do
any
type
of
fine
package
thing
so
that
it
works
well
with
different
ecosystems
and
then
also
I
just
remembered
the
last
time
I
tried
to
do
it.
B
The
base
checkout
was
one
gig,
because
I
had
a
bunch
of
example:
neural
nets
in
it,
so
it
was
just
I
was
just
like
okay,
I
don't
feel
like
maintaining
this.
It
feels
like
a
pain.
A
Okay, so if my tabs would just switch... there we go. Paul Henry, over on the Owncast instance, is asking: when defining a NixOS module, how far should we go with the explicitness of the configuration? For instance, where is the line between just a plain text config and a fully explicit, typed config like the matrix-synapse module?
B
Yeah, I would say that one isn't necessarily better than the other. Going the fully typed route is a nice user experience, because then, if you do `man configuration.nix`, you'll just see everything explicitly.

But, for example, I recently tried to package SPIRE, which is this identity-service framework, so that if you're in an organization you can uniquely provision identities for different machines and workloads. The thing, though, is that they have a very dynamic configuration there, and I really wanted to just say: hey, please use the HCL and then just delegate to that, but there's not a good way to do it.
B
Okay, SPIRE was probably a bad example, because they use HCL; but in a lot of workflows, like the Nix module, for example, we went over to just having a `settings` option, and that's untyped. But as long as you can reference the upstream documentation for where that gets consumed, I think that's actually a better workflow, because it's closer to what upstream has.
B
We don't have to maintain a Nix-specific translation layer that can get updated out of sync. That, I think, is the footgun of a fully typed module configuration: you're very, very specific to what the software is in its current state, and then you're unable to change with time.
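The untyped `settings` pattern John describes was standardized in Nixpkgs as RFC 42; roughly, a module sketch looks like this (`services.myservice` is a hypothetical name, while `pkgs.formats` and `lib.mkOption` are the real helpers):

```nix
# Sketch of an RFC 42 style module: a single freeform "settings" option,
# typed only by the upstream config file format, instead of one NixOS
# option per upstream key that could drift out of sync.
{ config, lib, pkgs, ... }:

let
  cfg = config.services.myservice;        # hypothetical service
  settingsFormat = pkgs.formats.json { }; # assume upstream reads JSON
in
{
  options.services.myservice.settings = lib.mkOption {
    type = settingsFormat.type; # any attrset the format can serialize
    default = { };
    description = "Settings passed verbatim to myservice; see upstream docs.";
  };

  config.environment.etc."myservice/config.json".source =
    settingsFormat.generate "config.json" cfg.settings;
}
```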
A
So I see that Matthias has another question on Matrix, so I'll leave it over to him again.
C
Yeah, I have one more question. John, do you know what resources you need to actually run something like Nixpkgs? How many build servers, how much storage, and so on? Do you know that, or...
B
Oh, to actually run Nixpkgs. So right now, we (as in not me, because I'm not maintaining this at all, but Graham and other people) run it, and I think the S3 bucket, last time I checked about a year and a half ago, was something like 220 terabytes, so it's probably getting close to 300 terabytes for all of the packages right now.
B
It's basically an append-only S3 cache, and in front of that we have the Fastly CDN, which is why you get relatively great download speeds. As for the build servers, you would probably have to ask Graham. Right now we have a very generous donation through Equinix, and they supply a lot of the build compute; I think the only thing that's in our own domain is the M1 Mac machines. Everything else, I want to say, is donated, but don't quote me on that.
B
Graham would be a better one to ask. So I don't think it's as much as people think it is, mostly because we have so many donations. And thank you to everyone who donates to the NixOS Foundation to keep the lights on.
A
Okay, great. So over on Matrix (you guys are killing me with these names, because I have a really difficult time pronouncing some of them), Facts gun, just correct me if I'm butchering that, I'm terribly sorry, asks: is there anything you'd change if you were to start Nixpkgs again?
B
Oh, I don't know. I think the main thing with Nixpkgs is that it's grown so organically over time that a lot of those decisions we made later, the painful ones, weren't apparent as necessary until we had to incur the pain of them.
B
So I don't know. Part of me just really likes the chaos of Nixpkgs, where it's not so rigidly defined that we have to do X forever. It's more like: as long as you get enough people on board with something (if it's Nixpkgs-wide, maybe you need to do an RFC), and as long as you're able to put in the effort, you can change Nixpkgs as you want.
B
As long as other people also agree that's the right path forward, that is. So, to answer whether I would do anything different: no, because I think it's just growing. You could think of it as a child growing into an adult; Nixpkgs is this organic body of code, and it's nice to see it grow over time.
B
Oh, you're talking about different languages. I'm not sure I'm a good person to answer this, because my first, and pretty much only, language is English (and Southern); that's just the world that I know. I think what we would want to avoid, though, is making future contributions burdensome. As long as it's something that can be done in addition to what we currently do, I think that's fine, but I would like to avoid a situation where, to contribute, you also need to provide localization or things like that. I wouldn't even know how that would fit in, because with Nix it's another parameter that would need to go into evaluation to determine whether or not it should apply.
B
I can't speak to it; I can't give a good answer on this, I apologize.
A
That's fine. I'm looking to see if any more questions are coming in.
A
Yeah, I know the struggle. Right now, you know, there's one more person typing on Matrix, so we're going to wait for that to either disappear or come in, and after that we'll wrap up, if that's okay.
B
Oh, I just woke up, so... but relatively good.
B
I guess, while we're waiting on that, one thing that I would just like to reiterate is that Nixpkgs is the culmination of many, many, many people contributing to it over time, and we are very welcoming to new people coming in. If you do want to contribute to Nixpkgs, don't feel like you're unable to do so.
B
I've never seen anyone get pushed away just because they didn't know something. Anyway, we have an open-door policy, and we're very welcoming to new contributions.
C
Cool. I think there's also an interesting comment from Silvan, who said that there's a Nixpkgs Architecture Team now to discuss the architecture of Nixpkgs, and there's a Matrix channel to join for that. Maybe, I don't know, you can post a link here.
B
Yeah, that's an excellent comment. Like I said earlier, Nixpkgs is a very organic community and code base, and the Architecture Team is a relatively new addition. That's the other thing, too: if you have your ear to the ground in the Nixpkgs community, there's likely to be something that you're very interested in coming along at a particular time.
A
Okay, cool. There are people asking to have their PRs reviewed.
B
Yeah, there's a "PRs ready for review" thread on Discourse, and then there's also a thread on the unofficial Discord. The main thing, though, is that if you do post a PR, I would say try to repay that in kind: review someone else's PR as well. If everyone who contributes a PR also helps get another PR merged, the status quo will be a lot easier to maintain.
A
I think that would be it; I didn't see any other questions pop up on any of the other platforms. So thank you very much again, John, for giving your talk. I know that I really enjoyed it, and I'm guessing we'll see you hanging around in the community, so I think I'll be wrapping up then.
B
Okay, my little exit will just be that I would really like to thank everyone who was involved with making this happen. I know that organizing, planning, setting up all the infrastructure, and scheduling takes a lot of time. And I know for myself, when I was early on in my Nix adventure, I was desperate for more Nix videos and content. So I think it's really good that we publish more, and I think this is really good for the community. So thank you.
A
So, yes, thank you, everybody, for joining. Once again, if you have more questions for John or anyone else from the community, head over to Matrix; the link is on screen right now, and we can have a continued discussion over there. Once again, thanks to everybody that made this happen, and I do hope to see you all next time. Keep an eye out on Discourse for the announcement of when the next lecture series will be. See you then!