From YouTube: NixOS Office Hours 2019-08-02
Description
Almost all of the recording with Samuel Leathers (disasm), John Lotoski, and Amine Chikhaoui discussing restructuring NixOps in PR #1179.
NixOps PR: https://github.com/NixOS/nixops/pull/1179
About Office Hours: https://github.com/worldofpeace/events/blob/master/office-hours/office-hours.md
A correction to my introduction: Worldofpeace wrote up information about Office Hours.
A: Hey everyone, sorry for the late upload of this office hours. Unfortunately, I didn't manage to record the first few minutes of the last office hours; that's something I'm hoping to improve upon next time. Just a little bit of business before I get into the part I did record. Firstly, a big thank you to worldofpeace, a Nixpkgs contributor who's been helping put together and organize these office hours.
A: My original intention was to have them be a co-host and help organize and put these together more visibly, but that didn't work out this first time. We're going to start having them be more present in future office hours. Alright, the next office hours is August 16th. I'm recording this on August 15th, so that's tomorrow, at 3:00 p.m. New York time, which is also 1900 UTC. So, a little bit more business.
A: If you ask a question, somebody will be able to ask it on the stream. Today, in the stream you're about to see, we're joined by Samuel, who's known as disasm on IRC and GitHub, and John; they're both from IOHK, and they've been doing a lot of work on NixOps. We also have Amine, who's...
A: ...the NixOps maintainer, and together we'll all be discussing pull request number 1179 to NixOps, where most of the code is actually deleted and moved out into another set of repositories. The goal here is to make it easier to maintain NixOps and let more people maintain little subsets of NixOps, while keeping the core simpler, easier to keep up to date, and higher quality. Anyway, I will see you all on August 16th at 3 p.m. New York time. Thank you so much, and see you soon.
B: Packet.net has an SOS console command, which comes in useful to connect to the consoles when you're spinning up a machine, and so we were able to integrate that into the command line. And we did similar work elsewhere: we migrated another repo over as well that Sam had, and we also did the same with Hetzner, although we hadn't tested it. So really we just tested a little more what had already been started, developed it a little bit further, cleaned it up a bit, and it's looking pretty functional as of right now.
B: In terms of the release.nix code, that was modified and revamped a little bit, so we could have the structure to just issue the build command with whichever plugins you want to pull in. And that gives you the option of pulling in plugins that are sitting in a backend repo somewhere.
B: It could also be your own repo, one that's not necessarily already specified in the repo file that comes with the core NixOps plugin file. You can also do a callPackage for a local plugin if you're doing development. So there are a few different ways you can use that, and the commands for different plugins — we can make them extensible as we need to, to pull in other commands for backends. So that's kind of where it's at. It still needs more work.
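The build-time plugin selection being described — core only ships the plugins you ask for, and each plugin can contribute its own subcommands, like `packet sos-console` — can be sketched in plain Python. This is a simplified, self-contained illustration with hypothetical names, not the actual NixOps plugin machinery:

```python
# Simplified illustration of build-time plugin selection (hypothetical
# names; the real NixOps plugin system differs in detail).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Plugin:
    """A backend plugin contributes resource types and CLI subcommands."""
    name: str
    resources: List[str] = field(default_factory=list)
    commands: Dict[str, Callable[[], str]] = field(default_factory=dict)


def build_core(selected: List[Plugin]) -> Dict[str, Callable[[], str]]:
    """The 'core' only knows about plugins passed to it, mirroring how the
    revamped release.nix lets you choose which plugins to build in."""
    cli: Dict[str, Callable[[], str]] = {}
    for plugin in selected:
        for cmd_name, fn in plugin.commands.items():
            # Namespace plugin commands, e.g. "packet sos-console".
            cli[f"{plugin.name} {cmd_name}"] = fn
    return cli


packet = Plugin(
    name="packet",
    resources=["machine"],
    commands={"sos-console": lambda: "attached to SOS console"},
)
cli = build_core([packet])
print(cli["packet sos-console"]())  # -> attached to SOS console
```

A build without the packet plugin would simply produce a CLI dictionary with no `packet` entries, which is the point of the plugin split: the core stays the same either way.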
B: I believe, since the PR went up, Amine has done some work, and also another community member, to bring a number of other backends over. As of right now, I think the two things that are lacking, in terms of being migrated over to the plugin architecture, are the test library and the documentation.
C: Yes, so the primary motivation we had was that we were constantly working on a packet.net plugin that we were using, and we were basically forking our own NixOps to do it. Then we were having to constantly rebase things onto master and deal with merge conflicts and things that changed. And when I talked with Graham about the plugin he had created in the repo — basically removing everything and making it a separate plugin — I was like, wow, this would be great. Others ran into similar issues.
C: ...what are functions that should be specific to a plugin. And we did add the ability to add features specific to a plugin as well. For example, we have an SOS console for our packet plugin: you can basically just run nixops packet sos-console and it gets you into the system, even if you completely screwed up the SSH networking and everything.
C: Yeah, I think one of the primary things we need is a decision that this is the way we want to go; then creating the repos for the plugins that NixOps is going to officially support in the NixOS org, and giving John and Amine and anyone else that needs it push access, to basically get those initial plugins created. And then once we have those initial plugins created, we can update our pull request to point to the NixOS repositories instead of our local ones, and get something into core.
E: It's going to make it easier to maintain NixOps, since we're focused on a couple of plugins which are officially maintained, plus the core NixOps, and you can have community members spinning up their own plugins and working with those without having a bottleneck on the core repo. So I think it's really the way to go forward. I'm not sure what our priorities should be; from what I can think of, make sure that we can have documentation and the manual.
E: Another important thing is running the functional tests. So I think, even if we can just run the tests and specify a file from another repo — to make sure that, for example, for the packet plugin we can run a functional test easily — that should be enough, in my opinion. Yeah, so that is sort of the summary.
E: Not sure if that's required. I know that Terraform did that for their providers, but I don't see that there is a problem: we can have the backends in the main NixOS organization and also have a couple in the nix-community organization. So we can have both. I'm not against having a different one, but I'm not sure if that's really needed.
A: So you mentioned some concerns about running the functional tests and documentation. Right now, the documentation includes all the different providers that we have bundled in. Do you think you would want to preserve that behavior, or maybe have a way to link to an index of these modules and how to view their documentation?
B: I had a question about that. I was thinking — I don't know how much work it would be, because I haven't actually dug into the code to see how difficult it might be — but what if each of the plugins still had its own documentation as part of the backend repo, but we put in a hook, in particular for the documentation, so that when a person goes to build their NixOps and they choose whatever plugins they want...
B: ...each plugin then passes its documentation blob to the core build, and that gets basically compiled in when the document gets assembled. So if you build it with three plugins, then you get the documentation for all three plugins in there; you get the documentation for the plugins you build it with.
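The documentation hook described above — each selected plugin hands a doc blob to the core build, which compiles one manual from exactly the plugins chosen — could look roughly like this. The names here are hypothetical, a sketch of the idea rather than the actual NixOps build code:

```python
# Hypothetical sketch: each selected plugin contributes a documentation
# fragment, and the core "build" concatenates them into one manual.
from typing import Dict, List


def assemble_manual(core_doc: str, plugin_docs: Dict[str, str],
                    selected: List[str]) -> str:
    """Compile the manual from core docs plus the docs of each plugin
    chosen at build time; unselected plugins contribute nothing."""
    sections = [core_doc]
    for name in selected:
        if name in plugin_docs:
            sections.append(f"== {name} backend ==\n{plugin_docs[name]}")
    return "\n\n".join(sections)


docs = {
    "aws": "How to deploy to EC2 ...",
    "packet": "How to use the SOS console ...",
    "vultr": "Vultr-specific options ...",
}

# Building with three plugins yields the documentation for all three.
manual = assemble_manual("NixOps core manual ...", docs,
                         ["aws", "packet", "vultr"])
print("packet backend" in manual)  # -> True
```

Building with only one plugin would yield a manual containing only that plugin's section, which matches the behavior described in the discussion.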
E: Yeah, I'm open to opinions, to be honest, but I think it's really a matter of deciding how to manage the manual and how to run the functional tests, because that's what we need to create the next release. For example, if we can run the functional tests for all the backends, I would expect that we would remove the files for, say, the AWS and GCE functional tests and whatnot from the core repo.
C: Okay, so this is the nixops plugin core that's in the pull request, and essentially I have a Vultr plugin set up in mind. So I can run ./dev-shell and say that, yes, I want the Vultr plugin. You could replace this with a path somewhere on your system or whatnot in here as well, but I just have it in the main nixops core plugin file right now. So this brings it up here, and then I can basically use this just like normal to deploy there.
C: Can you guys still hear me? Yes? Okay. And then there's a deploy, and it's starting to build stuff. The Vultr one doesn't have any extra stuff like the packet.net one, but I can run nixops...
B: So you can add resource types, but basically everything that's coming in from a plugin comes in through well-defined hooks. There are three hooks in there right now: one is for the CLI, one is for resources, and it's been a few weeks since I looked at the source code, so I don't immediately recall the other one. But essentially they are well-defined interfaces. So to the extent that those hooks let you pass functionality or resources through, that's what you can do; beyond that, you won't be able to do anything else to the core.
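A minimal sketch of what "everything comes in through well-defined hooks" can mean: the core defines an interface, plugins implement it, and the core only ever calls through those methods. This is illustrative only — the hook names below are made up and the real NixOps plugin interfaces differ:

```python
# Illustrative hook interface: the core defines the contract, plugins
# implement it, and the core only interacts with plugins through it.
from abc import ABC, abstractmethod
from typing import Dict, List


class NixopsPluginHooks(ABC):
    """Hypothetical contract between core and a backend plugin."""

    @abstractmethod
    def cli_commands(self) -> List[str]:
        """Extra subcommands the plugin adds to the CLI."""

    @abstractmethod
    def resource_types(self) -> Dict[str, type]:
        """Resource types (e.g. machines) the plugin provides."""


class PacketPlugin(NixopsPluginHooks):
    def cli_commands(self) -> List[str]:
        return ["sos-console"]

    def resource_types(self) -> Dict[str, type]:
        class PacketMachine:  # stand-in for a real machine definition
            pass
        return {"packet-machine": PacketMachine}


# Core side: aggregate whatever the loaded plugins expose, nothing more.
plugins: List[NixopsPluginHooks] = [PacketPlugin()]
all_commands = [c for p in plugins for c in p.cli_commands()]
print(all_commands)  # -> ['sos-console']
```

The design benefit being described is exactly this narrowness: a plugin cannot reach into core internals, it can only supply what the hook signatures allow.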
C: So we were using NixOps when I first got here, and we're very big in the Nix community; everything we like to do, we like to benefit the Nix community. So we figured, rather than use something like Terraform or other deployment tools, we'd use what Nix provides, and that's why we use NixOps.
B: So I'm thinking — probably not immediately, depending on the level of challenge and how much resource time we have to work on it — but at some point it would probably be good for the test library to work similarly: each backend repo or plugin would package or include its own tests, unit tests, and so on, and then, through another hook, a testing hook, core would be able to pull those in and execute them based on which plugins are being built.
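A testing hook like the one proposed might look as follows — a sketch under the same assumptions as above (hypothetical names, not actual NixOps code): each plugin registers its own test callables, and core collects and runs only the ones for the plugins included in the build.

```python
# Sketch of a testing hook: plugins ship their own tests, and core runs
# exactly the tests belonging to the plugins included in the build.
from typing import Callable, Dict, List


def packet_smoke_test() -> bool:
    return True  # a real test would deploy a machine and check it


def vultr_smoke_test() -> bool:
    return True


# Each plugin's repo would register its tests under its own name.
plugin_tests: Dict[str, List[Callable[[], bool]]] = {
    "packet": [packet_smoke_test],
    "vultr": [vultr_smoke_test],
}


def run_tests_for(selected: List[str]) -> Dict[str, bool]:
    """Execute only the tests of the selected plugins."""
    return {
        name: all(test() for test in plugin_tests.get(name, []))
        for name in selected
    }


results = run_tests_for(["packet"])
print(results)  # -> {'packet': True}
```

This mirrors the earlier point about removing the AWS and GCE test files from the core repo: once tests live with their plugin, core only needs the collection-and-execution logic.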
A: I think one of the problems that we've faced is that, for example, for the Vultr and DigitalOcean backends, getting changes and improvements merged can be very challenging, because the people who have commit rights to NixOps maybe don't have the knowledge or experience with DigitalOcean to know if a change is a good one. So I think we would want to move those providers into separate repositories where we can be pretty forgiving about who gets what.
D: So right now, the only capability is to expose the files at a store path. You can basically, for example, build a static website, put it into your derivation outputs, and then, once you find the store path, just give that to someone to share a preview of your website. So that's kind of nice for people that want to preview stuff. It makes this sort of review a bit easier, and the demo is basically pulling stuff from the binary cache on the fly. And yeah, that's it. I mean, the path that you use: you find the store path that you have, you put it after the hostname of the service, you enter that, and you get the file unpacked on the fly.
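The URL scheme being described — a store path appended after the service's hostname, with the file served on the fly — can be sketched like this. This is a toy resolver over a local directory standing in for the store, not the actual service's Go code:

```python
# Toy sketch of the URL scheme: /<store-path>/<file> is resolved to a
# file inside that store path and its contents returned. A temporary
# directory stands in for /nix/store and the unpacked NAR contents.
import pathlib
import tempfile

store_root = pathlib.Path(tempfile.mkdtemp())

# Pretend a derivation output containing a static site lives here.
site = store_root / "abc123-my-website"
site.mkdir()
(site / "index.html").write_text("<h1>preview</h1>")


def serve(url_path: str) -> str:
    """Map the path part of 'hostname/<store-path>/<file>' to contents."""
    rel = url_path.lstrip("/")
    target = (store_root / rel).resolve()
    # Refuse to escape the store root, as a real service would.
    if store_root.resolve() not in target.parents:
        raise PermissionError(url_path)
    return target.read_text()


print(serve("/abc123-my-website/index.html"))  # -> <h1>preview</h1>
```

In the real service the lookup goes through the binary cache (fetch the NAR, unpack while streaming) rather than a local directory, but the path-to-file mapping is the same idea.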
D: It can be any type of file. You can basically store files, directories, and symlinks in a NAR file, and files can have the executable attribute, but that's not exposed, because there is no notion of "executable" over HTTP, as far as I know. But yeah, that's the gist of it.
A: It's really incredible. Right, so I use Nix to build a lot of disk images that are booted over iPXE, which is a way of putting essentially an ISO over HTTP and then just booting your server off of it. This is pretty nice because it's really easy to start from a clean slate, but it's pretty annoying because I have to upload these really big disk images to a special place on my web server. Could this serve those iPXE images?
D: Definitely, it would be perfect for that. I think it's really wonderful. The only thing that's missing so far is a way to create a stable name, like a memorable name. If you have a release, you might want to name the release, whereas right now you have to remember the store path, the complete path. So I'm still thinking about how I could do this, because that would be pretty good. I just think that would be quite nice; I hope I don't miss it.
E: I have just a question: is the service able to follow symlinks? Because one of the issues in Hydra is with build products, which do almost the same thing — I think it just serves them out of the Nix store — and I think there is an issue with following symlinks in Hydra. I don't know if this fixes that issue.
D: It's not hard, but it means you need some logic to find the names in there. Right now I'm just streaming until I find the file, and then I output it. I'd just need to add a bit more logic to keep streaming and collect the names, and that should be fairly easy to do — I don't know, one hour of hacking.
D: It's because it's written in Go, and Go has quite nice streaming capabilities. So it's all just plugging things together: I'm saying, okay, unpack it, and while you're unpacking it, also read the NAR file. Then I have an iterator on the NAR file that just iterates until I find the file.
D: I don't know, I think it depends on the use case, because the problem with a redirect is that if you refresh the page, you're not going to get the update. So if you want to use it for static websites, you could imagine having the hostname be a name, and then you resolve through it.
C: I was wondering — I had raised a pull request on Hydra a week or two ago about an approach to use RabbitMQ as a queueing system. I got some feedback from Eelco that he thought it would be better done in PostgreSQL rather than RabbitMQ, but I was just wondering if anyone had any thoughts on that. That was something we were looking at doing, probably in the upcoming month or so, trying to improve our Hydras so we can get notifications properly.
A: Okay, I can start. I was actually talking to Eelco about that just last week, and it's a bit of a difficult balance: making it very scalable, especially for something the size of Nixpkgs, which would benefit from something like RabbitMQ, versus smaller users who maybe build ten things in their Hydra — having to install and maintain RabbitMQ would be a bit onerous for that case.
C: It might be. It's going to require some digging into how easy it would be to make it pluggable, and what the interface would look like for interfacing with the different queueing systems and whatnot, but I think that might be it. And then you could just have more features with RabbitMQ than you have with the Postgres backend, right?
A: Eelco is actually doing a bit of a Hydra hack day today — or he was; he was doing it on Zurich time, so I'm not sure where he is on that. You and he should definitely talk. It's a bit unfortunate that he was in Zurich this week, because he would have been here today in the office.
A: Two weeks from now, I think — I think he's available to come two weeks from now. But yeah, maybe I will send him a message and see if he's made any progress and will update that ticket, and if he hasn't, then I'll say something. And yeah, definitely do some research into using Postgres for events like that.