From YouTube: NixOS Office Hours 2019/08/16
Description
Today, Vincent Ambo (@tazjin on GitHub and Twitter) joins us to talk about Nixery: a Docker container registry which transparently builds and serves container images with Nix.
Nixery: https://nixery.dev/
About Office Hours: https://github.com/worldofpeace/events/blob/master/office-hours/office-hours.md
A
Welcome to NixOS office hours. This is the third one; these video calls happen every other Friday at 3 p.m. America/New_York time, or 1900 UTC. They are recorded and live streamed on YouTube. These calls are covered by the Contributor Covenant code of conduct, and I'm glad you're here. Today we have Vincent Ambo, also known as tazjin I believe, because Vincent works at Google, and as part of some project there he has created a Docker registry which automatically builds Docker images that you're able to pull, based on Nix expressions that you pass in through the URL.
B
I have a background in Haskell and other functional languages, and dabbled a little bit in Erlang for a while. I've been using Nix for about three years or so. For some of that time I've been using it professionally at work, but not at the moment, except for this particular project that I'm going to be talking about now.
A
Sounds good. One other note is that we are in the NixOS office hours channel on freenode, and I'm also watching there for people with questions. So if you're on the stream and you have questions, you can ask there and somebody will relay them. Yeah, so tell me a little bit about your project. It's called Nixery, yes?
B
That's perfect, thank you very much. Cool, so I'm going to talk about Nixery. Nixery is my current, what we call 20%, project, which means that I can spend some of my time at work doing something that is unrelated to my day-to-day job. As a big fan of Nix and also a big fan of modern deployment infrastructure such as Kubernetes, I thought it would be interesting to look at some of the ways of putting these two together. Graham, our host here, has written a blog post — for which I've joked on IRC that I should have a shortcut for linking it on my machine — where he talked a little bit about Docker layers and container image layers. I'll be getting into that in a second, and that was one of the first things that kind of motivated me to start this project. I've already done the introduction, so I'm going to skip this slide.
B
Yeah. Imagine a world where we don't have to explicitly build images when we want to deploy them on Kubernetes or some other container management solution. Currently, the workflow that people go through a lot of the time is something like: you need to run a small tool, let's say stunnel, in your cluster. So you go and fetch an image like Alpine, and you write a Dockerfile that pulls from this image and installs the tools you want. You push it to a registry, you copy the name of the image in the registry, you go into your cluster configuration, add it there, and so on. You've got a lot of steps that are explicit and imperative and stateful, because you're building an image that you have to store somewhere. And some of the time you really just want this tiny image that has one specific tool in it, right now, and you don't want to have to go through a whole process.
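The manual workflow described here might look roughly like this as a Dockerfile — a hypothetical sketch, with base image and tool taken from the example above:

```dockerfile
# Start from a generic base image...
FROM alpine:3.10

# ...install the one tool we actually wanted (stunnel, per the example).
RUN apk add --no-cache stunnel

ENTRYPOINT ["stunnel"]
```

Each of the follow-up steps — `docker build`, `docker tag`, `docker push`, editing the cluster config — is a separate, stateful step of the kind the talk is arguing against.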
B
So the first idea I had originally — which was discussed a little bit at NixCon last year with a few of you; I think some of the people attending are actually also NixCon participants and might have met me in real life — in this original idea, what I was thinking is that we could create a resource in Kubernetes clusters that users can define. You can see this little example here on the side. Is my cursor visible, by the way? Okay.
B
The problem with this is that Docker has no concept of what the content of these layers actually is, because they are just a sequence of steps. You could, in theory, create a step in which you copy a directory into your image, and then create a subsequent step where you overwrite a single file in this image. So now you've got a problem: you need to have these things in the correct order, and all sorts of other nonsense such as that. In the Nix world...
B
...we have figured out that things can be content-addressable — actually, for the Nix store I guess that's work in progress, but in general people in the community know what this is about. And Graham wrote this very interesting blog post where he pointed out that we can actually look at the file system of a container as something that maps pretty directly to Nix store paths.
B
So if you had, for example, curl — which depends on libc, SSL certificates and various other derivations in Nix — you could take these, put them in individual layers with their individual store paths, and then the order no longer matters, because the only layer that is actually going to be variable is the one on top, the one that assembles the actual environment. So if you build an image with curl, and then later on you build an image with curl and git, you would actually end up sharing a lot of the content of those two images.
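The sharing effect can be illustrated with a toy sketch — the package names here stand in for Nix store paths, one layer per path; this is not Nixery code:

```shell
# An image is just a set of store paths, one layer each.
curl_image="glibc openssl ca-certificates curl"
curl_git_image="glibc openssl ca-certificates curl git"

# Layers present in both images are stored and downloaded only once;
# ordering no longer matters, only membership does.
for layer in $curl_image; do
  case " $curl_git_image " in
    *" $layer "*) echo "shared layer: $layer" ;;
  esac
done
```

Pulling the second image after the first would only transfer the `git` layer plus the small top-level environment layer.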
B
I came up with the idea of what could actually happen if we didn't have to specify a resource in Kubernetes in advance. That means, (a), we disconnect from Kubernetes, which is nice, because not everybody is using it — it might even be overkill for certain environments — and, (b), one of the other steps is now also gone. So we're down to one step.
B
If you look at this slide here, you can see the name of a Docker image, and it's separated vaguely into three different parts, which I have colored. The first one is the URL pointing at the instance of the registry — Nixery, in this case. The second one: the first packages in this list are so-called meta-packages; I'll get to that in a second. And then, in the rest of the URL, you can see that every path component is basically its own tool.
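The anatomy being described can be sketched as follows; the package names are illustrative, and `shell` is the meta-package mentioned next:

```shell
registry="nixery.dev"   # URL of the Nixery instance
meta="shell"            # a meta-package
tools="git htop"        # every further path component is one package

# Assemble the image reference: registry, meta-package, then one
# path component per tool.
image="$registry/$meta/$(echo "$tools" | tr ' ' '/')"
echo "$image"   # nixery.dev/shell/git/htop
```

A client would then pull and run it with something like `docker run -ti "$image" bash`.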
B
So based on these specifications we can actually pull in most of nixpkgs, and there are some minor things that we've run into over time. The first one was that Docker image names cannot contain uppercase characters, and it turns out that we have a few packages that have uppercase characters in them. So we've now managed to use a very simple method of rewriting all of the uppercase characters to lowercase and then seeing if there's a matching attribute set for them if the casing was different. That works for something like 99.9% of all the uppercase packages in nixpkgs, with some exceptions, which are packages that actually conflict in their casing — but we don't have to get into that. So yeah, this turns out to actually be a pretty useful abstraction.
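A minimal sketch of that lookup strategy — exact match first, then a case-insensitive fallback. This is an illustration, not the actual Nixery implementation, and the attribute names are stand-ins:

```shell
# Stand-ins for nixpkgs attribute names.
attrs="git ImageMagick htop"

# Docker forces image names to lowercase, so a request arrives as e.g.
# "imagemagick"; try an exact match first, then compare lowercased names.
lookup() {
  requested="$1"
  for attr in $attrs; do
    [ "$attr" = "$requested" ] && { echo "$attr"; return 0; }
  done
  for attr in $attrs; do
    lowered=$(echo "$attr" | tr 'A-Z' 'a-z')
    [ "$lowered" = "$requested" ] && { echo "$attr"; return 0; }
  done
  echo "not found" >&2
  return 1
}

lookup imagemagick   # prints: ImageMagick
```

The fallback fails only for the rare attributes whose names collide once lowercased, as mentioned above.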
B
...--rm, to garbage-collect the container after we're done. And now I can write something like nixery.dev/shell, and if I run this — I have to actually also tell it what I want to run, which is bash. When I run this I get a container. This one was already pre-fetched; you see that there was no download time or anything. But inside of this container I've got basically no programs available.
B
So if I try git, there's nothing here. If I try htop, not available. So I might now want to exit and think: what about htop, and maybe ripgrep, which is a Rust implementation of grep that I'm a huge fan of. We can ask Nixery to provide us an image with these — it seems that I've got all of these cached. So now, if I type rg, then I've got ripgrep installed. If I try htop: nice, htop. I'm going to try to add one that isn't cached.
B
Right, so you can see now that Docker doesn't have this image locally, so it's now going to ask Nixery for the manifest describing this image. What's happening on the back end — I can open the logs in a second — is that Nixery will now map all of these to the packages in nixpkgs, to the attribute names, and then it will build this image and provide me with the manifest. As you can see, there is currently no way to feed back the build status.
B
So while this is ongoing, there is a server running in the back that's actually executing a nix-build, but because Docker expects registries to have pre-created images, we don't really have a method in the API for feeding back information while this is going on. Then you can see some stuff happening: we're downloading a whole bunch of layers, and once this is done we get a shell again, and now I should be able to run it — yeah, I should probably also actually feed something to it.
B
There you do that — tada: ripgrep/git. What has to happen — from Docker's perspective these are two different images, so it will actually go and ask Nixery for the image again. But Nixery doesn't care about the order of these components, so it will still be able to serve the same manifest back to the client, and the client will then download it — but you do have to go through that round trip.
B
Now this will take a second longer than it should. In fact, there's a bug that occasionally occurs where Docker is pulling layers again that it already has. I don't know why that happens. If there's somebody here who knows those Docker internals very well, feel free to jump in on the issue for that; it would be nice to get that fixed. Yeah — that's exactly what's happening now.
B
So if I were to take this list of layers and compare, they would actually already be there, which is peculiar, but I have to look into that. Yeah, I should probably mention this is not yet production-ready software. You can feel free to experiment, but we haven't really tagged a version 1.0 yet. The plan is to do that reasonably soon, if possible before NixCon, because there will be an extended talk — hopefully, if my proposal gets accepted at NixCon. Yeah, did that answer the question?
B
So one question that I get a lot when initially demoing this to Nix people is: okay, so you're pulling random images from a nixpkgs package set — which package set? We have all the different NixOS channels: we've got nixos-unstable, we've got 19.03, we've got the upcoming 19.09, we've got the older ones, and so on. Is there some way to specify which one you want? And the answer is yes, but it's at deploy time. So I can go to the git repository — it's on GitHub, by the way...
B
...github.com/google/nixery, if you haven't seen it. There are a few environment variables that are supported for configuration — specifically these three — that let the user specify which of the package sets they want to use when they're building the images. NIXERY_CHANNEL is the easiest one, and this is actually the default; it defaults to 19.03 at the moment, where we simply go and fetch that channel from GitHub and import it, and then off we go. NIXERY_PKGS_PATH is a local file system path.
B
NIXERY_PKGS_REPOSITORY will configure Nixery such that it uses the existing SSH configuration of your system to go and fetch the package repository from the path that has been specified, and it will map the tags at the end of the image name to, basically, the branches or git commits of the repository that you specified. And that's where things start to get a little more interesting. If people are interested, I can dig into the Kubernetes side of this, where we could use this for deployment, yeah.
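For reference, the three configuration variables being discussed look roughly like this. The names follow the Nixery README as I understand it, and the exact values are illustrative:

```shell
# Option 1 (the default): build from a channel fetched from GitHub.
export NIXERY_CHANNEL=nixos-19.03

# Option 2: build from a nixpkgs checkout on the local file system.
export NIXERY_PKGS_PATH=/var/lib/nixpkgs

# Option 3: build from a (possibly private) git repository; image tags
# map to branches or commits of this repository.
export NIXERY_PKGS_REPOSITORY=https://github.com/NixOS/nixpkgs
```

Only one of these needs to be set for a given Nixery instance.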
B
So there's one caveat: your package set must either be an overlay over the existing package set, or you must have come up with some other method to create a superset of the package set. This is because Nixery needs some of the functions from lib, and I didn't want to put them into the Nixery repository. So you need to have the actual package set somewhere under the hood, but you can overlay your own services.
B
I will demo this briefly. People who know me might know that I'm a monorepo proponent, and I have my personal infrastructure in a monorepo; I've created a little reduced version of it for demo purposes that looks roughly like this. Probably some space here... okay, here we go. So this is the default.nix. Is the font size okay — is this readable?
B
That's probably about as much as I can fit on screen. Here we go. Basically, the way this is currently set up is that I create an overlay function, added to the standard nixpkgs overlays, that overlays my local projects into the package set. It's imported down here: overlays, local packages. So if I go into the folder in which I have this repository, I can actually still get all the normal things I would expect to be able to get from Nix. Let me just put a few spaces here again: nix-build -A...
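The setup being shown can be sketched as a small Nix overlay. This is a hypothetical reconstruction; the attribute and file names are illustrative, not the actual monorepo layout:

```nix
# default.nix: the normal package set, plus local projects overlaid on top.
let
  localPackages = self: super: {
    # A local Haskell service, now addressable like any nixpkgs attribute.
    tazblog = super.haskellPackages.callPackage ./services/tazblog { };
  };
in
import <nixpkgs> { overlays = [ localPackages ]; }
```

With this in place, `nix-build -A git` still builds the normal git from nixpkgs, while `nix-build -A tazblog` builds the local service, as demonstrated next.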
B
...and then I get the normal git, which is part of the normal package set. But you can also see that my blog service is defined in here, which is a Haskell service. I can type this and it will instead go and build my blog; it's overlaid on top of the same package set. Now, what I've done is I have pushed this package repository into a cloud source repository.
B
So I have a remote here — let me zoom in again a little bit — which is the package repository hosted on my private, sort of not publicly accessible, thing. In the case of a different company this might also be a GitLab instance or a Gerrit instance, or some other form of repository hosting solution; I just push my package set over there. Now, what I can start doing — let me demonstrate, making use of this cluster briefly. Let me just make sure that there are no pods in here at the moment.
B
Yeah, so what I can do inside of my Kubernetes cluster: I can deploy a Nixery instance into the cluster, and then I can do a little trick where I create a cluster-internal load balancer, which gives me an IP address that is reachable from anywhere inside of this cluster. And now comes the hack: I can create a private DNS zone — which is a feature that most cloud providers have — that I can attach to this network, and I can call it "local" and create an entry in here...
B
...there we go — for nixery.local, pointing at the address of this load balancer. So what essentially happens is that I can specify Docker images where, instead of nixery.dev, which is the public instance, I can write nixery.local/shell and so on — but I can specify the packages from my own package set in here. So I can, for example, do this.
B
This command here is just going to give me the normal shell image that we used before. You can see that this is very similar to the command we used with Docker itself, but instead of running against the local container daemon, I'm running against the Kubernetes cluster, and I've got a shell environment again — there's no git, there's no htop. I can go and start doing the same thing, so I add, for example, git. I have to delete the pod again.
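On the Kubernetes side, referencing such an image is just an ordinary pod spec pointing at the in-cluster instance. This is a hypothetical sketch; `nixery.local` is the private DNS entry created above, and the pod and package names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-shell
spec:
  containers:
    - name: shell
      # Meta-package plus tools, built on demand by the in-cluster Nixery.
      image: nixery.local/shell/git/htop
      command: ["bash"]
      stdin: true
      tty: true
```

Because the registry builds the image lazily when the kubelet pulls it, no image has to be pushed anywhere before applying this manifest.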
B
I just don't remember on which port it starts... might be... oh, I don't have curl. Well, you get the idea. This is now actually serving my blog, and the blog is imported in my private package set over here. Just for the purpose of this demo I had renamed the package to tazblog instead of nesting it, but you can also nest them — periods are actually supported in image names — and then I can get the deployment infrastructure straight through this way.
B
Now, the interesting thing that starts happening is that I can, in theory, specify a branch or tag or git commit here. So I can write tazblog:master, and if I do this, Nixery is going to build the image and specifically fetch the branch — or otherwise the tag or commit — that I've specified.
B
This means it would feasibly be possible to have a package set where you've got a branch for testing development of various features, and you can ask the Kubernetes cluster to fetch a specific build of a service at a specific tag or commit. You could build all sorts of interesting CI pipelines out of this, which is something that I'm experimenting with at the moment.
B
I should specify explicitly that this is not an officially supported Google project, which means that if you, for example, use the public instance at nixery.dev — which you can experiment with all you want — I make no uptime or SLO guarantees, and if it eats your cluster or whatever, you can't sue me. That's the basic idea.
B
It was actually briefly mentioned. I think I wanted to make a flashy demo, so I already had a system that could do this based on the resource descriptions inside of Kubernetes, and I tried to show this to people, and I ended up writing YAML and getting the syntax wrong and trying to update this in the cluster and hoping that the whole reconciliation loop and everything works out. And then I thought: wait a second, I can actually just implement this registry protocol, because it's basically two GET requests.
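Those two GET requests correspond to the Docker Registry HTTP API v2 endpoints; a sketch of the URLs a client hits (the digest is a placeholder, and the image name is illustrative):

```shell
registry="nixery.dev"
image="shell/git"

# 1. Fetch the manifest for a tag. This is the point at which Nixery
#    can build the image lazily before answering.
echo "GET https://$registry/v2/$image/manifests/latest"

# 2. Fetch each layer blob listed in the manifest, by content digest.
echo "GET https://$registry/v2/$image/blobs/sha256:<digest>"
```

Serving a registry therefore mostly means answering these two request shapes, which is what makes the server side tractable.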
B
It's not that complicated. The original idea was just to do this for the demo — I can put the packages in the URL — so it kind of just came up along the way. And then, after showing that to people, they ended up being way more interested in it than in the original idea of having Kubernetes resources, because this is in some ways more flexible. Especially in cases where you want to quickly launch one particular image with a tool — which happens to me all the time — this is quite useful.
B
You know, I can share an idea with you that I came up with with a colleague the other night. We were brainstorming other ways of serving these packages, and some of you might know, if you look into the NixOS system config where the build primitives sit, we've got not just the ability to build container images — we can also build qemu images and netboot images and all sorts of things.
B
So we vaguely experimented with the idea of serving a netboot image that contains a custom GRUB. Now, the interesting thing is that GRUB can chainload additional netboot things over HTTP, and you could build an interactive GRUB menu where people select Nix packages and architectures and so on, and then you serve the netboot image with that Nix system. There are a few more of those that we've experimented with; I think there's a lot of potential here for Nix in general.
A
That is very interesting. Another interesting thing is, I think with squashfs, or maybe CPIO files, you can just append at the end, so you could pre-create this format and then just keep serving one per store path — yeah, append until you've put it together. Well, very cool. So, any other questions from the audience?
A
So I have one pull request here — I can't, unfortunately, bring it up, but it is indeed... I'm glad that Eelco is here; hopefully his microphone works so that he can talk. But it's from worldofpeace, and it's about the ISOs that are published on the website for people to download. Would you like to introduce that, worldofpeace?
C
So I've been thinking that we should distribute a GNOME 3 ISO — for a while now, actually — and I think I noticed, looking at the history on GitHub, that it was an idea that came up in the past but really didn't go through, so I'm trying to sort of revive that, so we can actually have it.
D
And so, if we offer multiple graphical ISOs, then people have to start wondering, well, which one is the best? Perhaps some of them will be better tested than others, and then some of them might end up bit-rotting. So I think it's better to have one well-tested ISO than multiple that might all be not so good. For example, if you now go to the Ubuntu download page and click on download, you just get one ISO, so you don't have to think about it.
D
Well, you have to decide whether you want the desktop or the server ISO. But the other issue is: these ISOs are pretty big, so they add some overhead to the build process, and they have to be built for every commit to nixpkgs. Unlike most packages in nixpkgs, which are only rebuilt if they actually change, these ISOs have nixpkgs itself as an input, so they get rebuilt on every commit. So it's a not entirely trivial amount of overhead, and it's, yeah, a blocker for the channel generation scripts.
C
So yeah, those are about the points that need to be addressed. I was actually a little confused by what you meant by testing in particular — I'm not sure whether it's tests for the installer happening in the graphical ISO, or install tests in general, or tests run on GNOME 3. Because I would say that GNOME 3 is very well tested: we have several active maintainers, and it has tests currently.
A
So I did some quick looking: for as long as I've been tracking the nixos-unstable channel updates, we've had several hundred bumps, and each ISO is roughly a gigabyte, so that would translate to several hundred gigabytes of ISOs that we're keeping around. I'm not saying that's a good thing or a bad thing either way; it just is a byproduct of how our builds and releases work.
D
No — so currently there is no garbage collection of the cache whatsoever. So basically we have all of Hydra's builds from the last X years; I don't know exactly how far back. That's because a sponsor has been paying for the S3 storage cost, which we're very grateful for. But at some point that might stop, and then we'll have to start garbage collecting.
C
The point you had was that you didn't want to open the floodgates to an ISO for every single desktop environment.
A
Yeah, well, that's a pretty soft and hairy criterion. I think there possibly should be other criteria: long-term commitment, being part of the release process, demonstrated commitment to keeping it up to date and tested, and possibly even having an idea of the quantity of users. But that's not really a question that we can answer right now, yeah.
C
I think I did say that the current size of the GNOME ISO is 1.5 gigabytes, and that's without optimizations — the defaults currently aren't great, or the same as they should be upstream.
A
All right, well, we are actually running out of time here — we're in the last few seconds that we have allotted for this stream. I wonder if we ought to set up a second call just to discuss this ticket and that sort of criteria, or even dedicate a whole 15-20 minute segment to it in the future.
F
Don't forget that the images are not only in the cache but also in an S3 bucket for the website, so it's two times the size.
A
That's right — thank you, Samuel, that's really great. Well, thank you everybody for coming to the third NixOS office hours; the next one will be two weeks from today. This will be published on the NixOS Foundation YouTube channel, and I'll post the link in the next office hours. Also, in a couple of weeks NixCon is coming up in Brno — I am just pulling that up, because I forgot the actual dates: October 25th through the 27th. Tickets are for sale now on the NixCon website, where I hope to see more talks from Vincent on Nixery, as well as your talk about something you're doing with Nix in production, or just something interesting you've done with Nix. And again, thank you so much, Vincent, for coming and talking about Nixery today. I really appreciate it. Thanks.