Description
This was an unconference session and therefore has no proper description.
B
A couple of quick announcements; I'm kind of just facilitating, so if you have any questions about logistics or what's going on, just ask me. A couple of things: this is recorded, so remember that anything you say, questions and so on, is being recorded. Introduce yourself if you have a question; just say who you are, where you're from, and what you love working on, so the folks outside or in the other room can kind of get acquainted with you. And then we're going to reserve some time at the end for a last call for people who want to speak.
A
Yes, thank you for the introduction. For those of you that don't know me, my name is Jeff Grafton; my GitHub username is ixdy. I've sort of been one of the main people working on the build and, to some extent, the release infrastructure for kubernetes/kubernetes, the main project, as well as doing some stuff in repo-infra and test-infra and in the release repo, some other things like that. I guess I proposed this just because I largely know how the build system kind of works, but it's not super well maintained, and I know there are issues. One thing that some people complain about is the building of some of the release artifacts, like the RPMs and debs, which currently live in a separate repository; and there's just some lack of clarity around, you know, what are we going to do with Bazel, other things like that. So I don't really have a particular topic. I don't know if people have things they're curious about, things they want to talk about or ask about; I can answer questions, we can discuss things, we can try to figure out what we want to try to do in 2019. So I guess: does anybody have anything they want to say or ask to start off with? Or I can talk about some of the problems and some of the things I think might be worth tackling.
C
A
So looking at kubernetes/kubernetes, the main project, it's pretty much all a bunch of Go, and the legacy, or, you know, the original build system, which is still used for official releases, is basically a series of bash scripts with some Makefiles and Dockerfiles. The basic idea is that everything is driven by a Makefile, and then, for the releases, so we can actually get hermetic builds, we end up creating a build container, so it runs in Docker. We sync all of the source into that; eventually, through some shell scripts, we end up running `go build` a bunch; we copy everything out of the Docker container, and then we use that to build Docker images, and then we tar everything up into tar files, so we can upload the tar files to GCS and push the Docker images up to GCR. And then there's a separate out-of-band process which builds the debs and RPMs, and partially that works inside Google; that's kind of something we want to improve.
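The flow just described can be compressed into a rough sketch; the image name, script path, and output layout here are placeholders, not the real kube-build values:

```make
# Hypothetical Makefile sketch of the pattern described above: run the build
# inside a pinned container so the toolchain doesn't depend on the host,
# then tar up the outputs. Not the real kubernetes Makefile.
BUILD_IMAGE := golang:1.12

release:
	docker run --rm \
	  -v $(CURDIR):/go/src/k8s.io/kubernetes \
	  -w /go/src/k8s.io/kubernetes \
	  $(BUILD_IMAGE) ./hack/build.sh
	tar -czf _output/kubernetes.tar.gz -C _output bin
```

The container pinning is what makes the build roughly hermetic: every release is produced by the same toolchain image regardless of what's installed on the release manager's machine.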
A
We've also been working on Bazel, which we were looking at as possibly a more maintainable way, or a differently maintainable way, of trying to deduplicate a lot of this stuff, so we don't have to maintain shell scripts and Makefiles and Docker containers and all these things; trying to simplify them onto one system that could maybe be shared by different repositories. I don't know if that answers your question, but if there are follow-ups people want to ask about, I can go into more details on any of that.
D
A
So one issue right now that's a problem is that all of the specs for the RPMs and debs live in the kubernetes/release repository, and nothing's versioned there. So as we have, you know, three different versions of kubernetes we're building at the same time, sometimes the dependencies change, and we aren't really tracking that version in the specs themselves, so a lot of the specs are falling out of date.
A
It's actually doubly complicated because, as part of the Bazel experiment, we actually moved some of these specs into the kubernetes/kubernetes tree, and so now we have specs in both places that are out of sync: the official RPMs and debs use the ones in kubernetes/release, whereas all of our CI is using the stuff that's in kubernetes/kubernetes. It's not a good place to be. So I think one of the plans, if we actually have volunteers, at least as an initial thing, is trying to make kubernetes/kubernetes the source of truth for the specs for those artifacts; but then how exactly those are built is still kind of an open question. There's been some discussion of maybe using, what is it, nfpm, which is basically a way of specifying everything in one YAML file, and it'll generate the packages. Bazel was one plan, though there are still some things we're lacking in Bazel.
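As a rough illustration of the nfpm idea (the field values here are hypothetical, not the project's actual packaging spec), a single YAML file describes the package:

```yaml
# nfpm.yaml sketch; names, version, and paths are placeholders.
name: kubelet
arch: amd64
version: v1.13.0
maintainer: Kubernetes Release Managers
description: Container cluster node agent
contents:
  - src: ./_output/bin/kubelet
    dst: /usr/bin/kubelet
  - src: ./build/kubelet.service
    dst: /lib/systemd/system/kubelet.service
```

from which something like `nfpm pkg --packager deb` or `nfpm pkg --packager rpm` produces the respective artifact.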
A
D
Putting the RPM and deb metadata into the main repository is one side of the story, right. But if we actually really want to support rpm-based and deb-based distributions, we need to have specs for SUSE, specs for Red Hat, deb information for Ubuntu and for Debian, and so on. That's one side of the story; the second part of the story is how we're actually able to publish it, and publish it in a way...
D
...such that we can have different distributions. Because right now we only have one set of RPMs, which luckily mostly works on CentOS and openSUSE, but not always; similarly, we have a bunch of different versions of Ubuntu, which might require different tricks. And then we have the problem of multiplying all that by the branching of kubernetes itself: 1.12, 1.13, 1.14 and so on. Right.
A
Yeah, I think you're getting at a real problem with this, or one of the real problems right now: we create these debs and RPMs, but there are all these different distributions and things. I think right now we target, you know, not the latest Ubuntu, so people have complained; I think we don't have Bionic debs, for instance, and I don't know what we're... you probably know more about the RPMs than I do.
D
A
Yeah, so different hacks is what you're saying; pretty much, yeah. It's one of these things where I think we need somebody to step forward and own this, and I don't exactly know... maybe the cluster lifecycle team is the right group to do that, or maybe it's a shared thing between cluster lifecycle and the release team. But at some point the question is: is it the kubernetes project maintaining this? Should it be more like an OS distribution?
A
We keep hitting these things; I don't know the answers to these questions, but I think we need to look into this and try to figure it out, and I think that's something that's lacking. It's easy if you're just building, you know, just pushing up the binaries and Docker images and whatever; but when you're starting to take on all the distributions and dependencies and things like that, it starts to get really tricky. And then there's the notion of where to host it.
A
I don't know if people here are familiar with it, but there's an infra working group which is basically trying to get all the kubernetes open source project infrastructure off of Google-owned projects and onto CNCF-owned projects, and this is one of the things somewhere down the list: yeah, we need somewhere to host debs and RPMs.
A
E
How common is it for projects to publish RPMs themselves, versus the distribution publishing packages? I would wonder if maybe we might not actually want the project to run all of this. And I would add that, for the most part, the packages just contain the binaries; the only thing that gets kind of interesting between releases is the systemd unit files, I believe. There isn't really anything else in the package; we don't have man pages or anything like that.
D
Well, at least I can share some of the history, what I think was discussed last spring at KubeCon in Berlin. On one hand, yes, we want distributions to package kubernetes. The problem is that most of those distributions have very long release cycles; so imagine if CentOS were to package kubernetes right now, it would be, like, version 1.8, or maybe even earlier.
D
The second thing is that we want people to be able to use the latest and greatest version, so it means that for us, as cluster lifecycle, we need to provide a way to install it, and install it in an easy way. We are definitely not the best people to maintain those packages, but we are trying to balance somehow between usability and functionality.
E
A thought that someone else in cluster lifecycle, I think, initially mentioned, and then it kind of fell off; I would wonder how viable you think this is. Potentially what we could do is have kubeadm or kubelet ship with their files baked in, with something like go-bindata, and just write them out; and then for what you need, like the kubelet, kubeadm, maybe kubectl binaries, you can ship those however you want, and the rest is Docker images. It might not be the best idea, but yeah.
D
Well, personally, when I'm talking about what you actually need to run kubeadm, what I am always saying is: if you use CentOS or Ubuntu, you can use the packages we provide; but if you are using anything else, or any of the unsupported distributions, just follow our instructions, like what was created for CoreOS. The CoreOS instructions specify exactly what you need: like, three or four binaries.
A
Anyone else have thoughts or comments? I mean, I think that's one kind of open question I still have, and I don't think there's really an answer for it: what artifacts should the kubernetes project itself be building and releasing? What are the different repositories responsible for? What are we actually trying to ship? We kind of have a variety of things, and what is actually most useful to end users?
E
I don't think we actually have CI using anything but the Debian packages, and we have pretty limited CI for that. For the continuous stuff, there's some continuous integration that uses the Debian packages, for kubernetes-anywhere, but there's not much else using the packages. So if we expand the number of distributions that we're shipping things for, I'd be a little bit concerned about saying, hey, we ship these artifacts, but also no one has stepped up to make sure they work.
F
Part of it is how we host the debs and the RPMs and how we go about promoting them, but SIG Release is also the key owner of what artifacts we actually ship; and if we're going to ship an artifact, how do we test it, and how do we make sure that it is the same kind of high-quality release that you get from either looking at the binary or the source code? If you're going to get a deb or an RPM, is that going to be held to the same bar?
F
G
An interesting side note on which artifacts we control and which we don't: in the case of CoreDNS, which went from basically self-maintained to a CNCF project, we're on the verge of switching which artifacts actually get delivered with kubernetes. So that's still an open question even for stuff that is currently shipping: the containers get built by the project, but the pull that is actually done, and the API that is used, is actually controlled by kubernetes.
G
So even for something that is currently shipping, it's an open question. If you have ideas, please come forward, in general, because it will get harder and harder, with a lot more plugins coming in and everything being decoupled into more container-based things. That's not completely release-based, but I thought it was worth raising.
D
That actually reminded me of another question. Last month there was KubeCon China, and most people don't know how China's firewall behaves. So, OK, the scenario with the current release infrastructure for kubernetes is: our download site is actually accessible from China, so you can get the information about what the latest release is, and you can fetch binaries; but none of our images are possible to fetch from China, because Google's registry is banned. So, going further...
G
So there is a proposal, at really early stages, which comes down to the notion of the kubernetes project controlling the endpoints but not controlling the actual infrastructure behind them; so it doesn't matter if the images are hosted by Azure, Google, AWS or whatever, the entry point is always something like k8s.io, so that the switch is seamless. With that change, it would be easy to get a company in China, who knows the local providers, to host them.
G
There is a workaround if you want to do something like that: one of our open source projects is basically a redirect engine, which also supports mapping the whole of GCR into your own domain and adding a cache to that; that works. The other option is hosting a local Docker registry, but that requires know-how.
A
H
If I miss one thing, or worse yet, if I have to rebase changes, I'm looking at an hour and a half of just waiting; for changes which didn't produce any measurable, observable change, I have to wait an hour and a half. And sometimes that hour and a half feels very stressful, because the code freeze is in an hour and 45 minutes, and so I have to be very careful.
E
So I feel like at various points in time the idea had been floated about making the k/k repo primarily just an integration repo: you know, kubernetes/foo and kubernetes/bar both get vendored into kubernetes, and we don't actually do any development in k/k, right. Obviously that has not really happened; maybe it's more the case today than it was a couple of years ago, but I don't think that vision has fully been realized. What do you, and other people here, think about that vision?
A
I hadn't heard a whole lot about it until, like, today, actually, at the SIG Architecture talk, where they talked about the code organization subproject. I was like: oh, that's neat, that sounds cool. So I think it sounds like it's something that the organization is interested in, but also something that still needs people to step up and actually work on it.
A
I'm probably going to do a horrible job describing this, but I think the basic gist was that the core kubernetes binaries are kind of like the Linux kernel, and then all the stuff that actually makes kubernetes work is kind of like the Linux distribution you usually use; and currently kubernetes/kubernetes is trying to be both, and maybe not doing the best job at being both, and so maybe we need to actually figure out how to treat them as separate projects.
A
Well then, that raises different questions about what we build and how we build, and how we release and test everything together in a meaningful way. I don't know if anyone has answers to that; I think it's an interesting thought experiment that should probably be explored some more, but it does raise a lot of questions about how we actually synchronize release cadences and test everything together.
A
Because now, if you're actually developing as a bunch of different... not mono repos, but mini repos, maybe not super mini but smaller repos, then you still need a way to make a change that affects a bunch of different things together, be able to get those PRs in, and be able to test these things in CI, constantly integrating. How does that work? Is that something that the test infrastructure can help with, or the build infrastructure?
A
So I think there's a lot of open questions about this, and I think that's why, largely, the project has sort of stayed a mono repo and not really tried to split things out. Some pieces have moved out, but a lot of the core stuff is still in the main repository, because it's challenging and nobody's really tackled that yet.
J
I think we'll likely use staging, the mini mono repo, for a while; like kubectl, as they're trying to do things, they're bringing things into staging, and there was some discussion about the kubelet going into staging, and, you know, just treating the main repo as a little bit less of a monolithic codebase and a bit more of an integration point. So it's headed that way, but, you know, like everything in kube...
K
A
I mean, I think we'd gladly do that; the main thing is we just need actual machines to test it on. And I guess that gets into the question of whether that's something somebody could help with; I don't know at what level we're testing, and maybe that's an e2e test, so we're kind of trying to move... I mean, it seems like a lot of the desire...
A
The desire is that we're not using these sort of large, multi-machine e2e tests on PRs, because they're kind of slow and flaky; but I do think we should get more testing on multi-arch, especially in CI, and I think it's mostly just been waiting on the relevant companies, or the CNCF, to supply the actual machines to be able to do that testing. We are definitely building it, and I think we're interested in that, but I think it's mostly just been a lack of resources.
A
L
So, on top of that, one question I suddenly remembered: currently all the presubmit testing is using the Bazel build, and I think I've talked with the Windows people; they have some interest in a dedicated Windows presubmit. They are asking if we are able to build Windows versions of the binaries, even in Bazel; it seems like currently the Bazel arch is hard-coded. Yes?
A
That way, across multiple PRs that are mostly not changing much, most of the code is the same, so we can actually reuse those build artifacts, and that cuts our build time down from, like, 20 minutes to 5 minutes; test time similarly goes down a lot, so that helps a lot. But yeah...
A
So with the Bazel build we have some challenges, mostly around cross-compilation: cgo cross-compilation isn't really implemented right now, so we're mostly just targeting building for linux/amd64, which is the main thing we've cared about. If we're trying to target something like Windows, right now those are hard-coded strings; we could probably make them more configurable, but I'm not sure...
A
...if we'll be able to cross-build everything for Windows, because of some of the cgo requirements we have in the kubelet; although I don't know how that works for the kubelet on Windows. I don't know enough about the Windows support to know how much of an issue that might be. If you have anything to add, please...
H
We talked about this a little bit at BazelCon, actually, and the impression I got is that none of this stuff is technically impossible, but we lack a champion for it. So if somebody wants to step up and figure out how to glue all of this stuff together, I think most of the pieces are there already; but, you know, nobody's really motivated to replace the jiggery-pokery that we've got.
E
Windows is a bit of a different problem than other architectures, because it's completely different: it's not just the actual build, the functionality and the logic are different, so we're going to have to have a separate discussion around that. But as far as architectures go, that's exactly the kind of thing where we don't want to cram all of the things into presubmit; we're really looking to get people to do that in postsubmits.
E
We have a program now, through SIG Cloud Provider, for getting cloud providers to upload conformance results; there's also testgrid, which is kind of a first step for getting signal. And just this morning we have a new one from Arm, where they're using Amazon's new Arm machines to run conformance tests; we previously had some limited results from them. I don't think they have any good CI for this yet; I think they're just running it.
E
But if someone wants to step up and say, I have a bunch of POWER machines, I'm going to run the conformance tests from head and publish the results, the release team will probably be more than happy to start looking at that after seeing maybe a week or two of good signal, and I think that's probably the right way to get CI for those. I don't think it's very likely that we're going to somehow turn, say, our GCP credits into Arm machines.
M
A
I'd be curious; I could follow up on the specific details of what you're trying to do. I know that's something I've worked a little bit on with generated code, so, some background there, in case you're not familiar. There is a fair amount of generated code in kubernetes, and most of it follows this kubernetes-specific code generator pattern; there are these frameworks in k8s.io/code-generator, and right now I think there's one code generator and a few different things built on it. So basically...
A
Basically, you have to declare all your inputs in BUILD files, and this kind of operates across the entire tree, so we've been trying to figure out how to make that work well. But then you also run into other things, where a lot of these tools expect to run in a valid GOPATH with a valid GOROOT, and so we've written some custom Bazel rules to do that. So the OpenAPI code generator is actually working in Bazel, and there's some upcoming support for other code generators.
A
But it's kind of case-by-case; we haven't really invested time in making it general. It would be great, again, if this was something people were interested in: we could potentially write Bazel rules, or custom rules that would live next to the code in a repository, that you could import into your kubernetes project; and rather than having to generate all of your generated code, your client-go code and your API machinery and all this stuff, potentially there would just be a Bazel rule.
A
You would run it on your repository, and then Bazel would build everything; you wouldn't have to check it in anywhere. The side effect of that is that now somebody can't `go get` your repository, which is kind of an open question too: which path do we want to go down? Do we want stuff to be go-gettable? There's a whole lot of other considerations there, and ways we can attack that; that's kind of another trip down into the weeds.
A
I think another kind of interesting question: right now I feel like we have a hundred and fifty repositories in kubernetes, or something like that, and how do you actually build the relevant artifacts from each repository? In some cases there's a Makefile; in some cases it's "read the README and see what it says". I think a lot of them are just `go build`, or maybe there's a Makefile but it's a different make target. Some of them are hermetic...
A
Some
of
them
are
not
so
there's
a
very,
very
non-standard
way
of
building
things,
and
it
might
be
interesting
to
have
a
more
standard
way
of
building
stuff,
but
also
coming
to
that
consensus
might
be
difficult
with
this
many
repositories.
So
I,
don't
know
I
don't
know,
has
anything
people
are
interested
in
I.
Think
there's
a
lot
of
interesting
problems
here
and
I
think
we
can
make
a
lot
of
improvements
at
getting.
A
...reproducible builds and hermetic builds, and, you know, making things predictable, such that if you're contributing to one kubernetes project, you should be able to go to another kubernetes project and know how to run the tests, know how to build things, and have that expectation; and right now we're not there, for sure.
B
A
D
K
G
H
A
I guess, yeah, another show of hands: who here generally uses just `make build` or `go build` or `go test`, directly calling make or go or whatever? And who uses Bazel? All right, that's interesting; well, I guess that was pretty heavily weighted by my team, but there were a lot more Bazel people here than I expected.
D
A
H
M
So I don't work on kubernetes itself; yes, I send PRs from time to time, but for our internal stuff, for our project, we use Bazel, and I recently did a lot of work to switch a bunch of the code that builds the code over to Bazel, away from shell, Python, make and all this recursive stuff where one thing calls another thing calls another. It's pretty nice, actually; I think I'm really happy with the result. But the current issue is the code generation, the client generation.
M
Conversion generation, defaulting and so on; it would be really nice to replace that, but I don't think I can do it myself without modifying the way the generated code is generated. Like, you don't know what the outputs will be, and that's not compatible with how Bazel expects things to be; so I need kubernetes to change so that I can do something about it.
A
I know I have some thoughts about this; I haven't turned it into a doc, but I have some ideas of how we could better support the kubernetes generated code in Bazel. I kind of deemphasized this for a bit, because of the uncertainty about whether we actually wanted to move away from checking the generated code in, but I can certainly follow up, and maybe then somebody can come along and try to implement it; it might be a good first step, yeah.
I
I would say: Jeff recently updated how we do the OpenAPI generated code, and that has a fairly good Bazel rule for it, so maybe looking at that would be a good place to start before Jeff's doc exists. And then, for writing Bazel rules: it's actually fairly approachable; it's like a variant of Python, more or less, so it's not actually that bad.
I
N
I do maintain some parts of the cloud-provider-gcp repo, and it's a set of fairly trivial Go binaries; it's using Bazel now for everything, building images and everything, but I really wish it didn't. For that use case Bazel is overkill; it would be fine with just `go build` in, like, a shell script. So, to your point of having a unified build system: it might be just too much work to set it up for some simpler repos.
H
Bazel lets you basically define functions that produce a set of outputs for a given thing, and I've found that to be very helpful over a Makefile, because you can very easily say: I want these three binaries to each have a binary, a tarball and a container generated for them. It's a bit easier to manage, I think, than having a bunch of different Makefile dependencies for all those things; and with sourcing shell scripts and stuff, it's a little harder to trace what's going on.
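As a sketch of that pattern (the macro itself is hypothetical; the rule names come from rules_go, rules_docker, and Bazel's packaging rules), a small Starlark macro can fan one binary out into all three outputs:

```python
# release.bzl sketch: one macro call per binary yields binary + tarball + image.
load("@io_bazel_rules_go//go:def.bzl", "go_binary")
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

def release_binary(name, embed):
    go_binary(name = name, embed = embed)
    pkg_tar(name = name + "-tar", srcs = [":" + name])
    container_image(name = name + "-image", files = [":" + name])
```

Calling `release_binary(name = "kubelet", embed = [":kubelet_lib"])` in a BUILD file would then declare all three targets at once.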
I
I don't know if we want to talk more about Bazel, but, throwing potentially another monkey wrench into the system: I feel like a large part of our build complexity is trying to support, you know, Mac's terrible versions of everything, right, which is the primary reason why we build in a container and whatnot. Have we considered dropping support for Mac builds? What would that mean for people building on Mac? What do people think about that idea?
O
I
J
There's friction, but I haven't really hit stuff; it's just that people forget. People occasionally run go tests on Macs or whatever, and usually, when those breakages get hit, there'll be a PR open for it. So there are enough people that there's some pressure, but it's not a huge pressure, and it certainly happens frequently that people forget.
O
Yeah
so
like
a
lot
of
the
issues
that
I
had
we're
just
sort
of
like
you
know
slight
mismatches
between
you
know
what
you
know
the
new
make
on
you
know:
Apple
versus
you
know
everywhere
else
on
the
planet
or
you
know
some
for
some,
some
library
doesn't
exist
or
you
know
some
test
assumes
it's
being
run
in
Basel
with,
like
you
know,
the
work
space
being
switched
and
stuff
like
that.
Is
you.
K
O
J
I have 1200 commits in the kubernetes repo that say it's possible, but it's painful. And Bazel does have a lot of improvements; though a lot of the things we've done to improve the Mac support actually hurt me, because I don't run Docker for Mac, for instance. So some of this is like...
J
Maybe
we
don't
do
a
great
job
of
pulling
the
community
who
is
on
Mac
like
what
they
actually
need?
I
mean
the
reproducibility
of
basil.
I
haven't
any
problems
with
basil
on
Mac,
since
they
finally
fixed
like
this?
Is
the
sandbox
sandbox
stuff
started
working
again,
so
it's
more
I
think
basil
is
better
than
it
ever
has
been
in
the
the
rest
of
the
support
has
atrophied
a
little
bit.
O
I
Sort of a minor thing that Christoph did over in the test-infra repo: basically, we just depended on a version of sed, and obviously that did not work with the standard Mac one, so we just added a check to make sure that gsed was installed, and if it wasn't, give people an instruction for how to install it on their Mac. That has been helpful; obviously it's not perfect, and I'm sure there are other things we'll run into, but that's been an improvement.
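A minimal version of that kind of check might look like this (the wording and variable name are hypothetical; the real check lives in test-infra):

```shell
# Prefer GNU sed: the stock `sed` on macOS is BSD sed, whose flags differ.
if command -v gsed >/dev/null 2>&1; then
  SED=gsed
elif sed --version 2>/dev/null | grep -q "GNU"; then
  SED=sed
else
  echo "GNU sed is required; on macOS, try: brew install gnu-sed" >&2
  exit 1
fi
echo "using $SED"
```

Scripts then invoke `"$SED"` instead of `sed` directly, so they behave the same on Linux and macOS.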
N
H
We were having trouble with envsubst, because envsubst is a command-line utility that sometimes exists on Linux and sometimes doesn't; and I had very good success with this: I found somebody had made a Go version of it, I added that as a Go dependency to Bazel, and now Bazel builds it, and we don't have to worry about what version is on somebody's machine, because we build our own version. We know it's compatible, because we just built it.
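For context, envsubst (from GNU gettext) just expands shell-style variable references in a stream. As a rough stand-in that illustrates the behavior, this sed one-liner handles a single `${VAR}`:

```shell
# What `envsubst` does, roughly: replace ${NAME} in the input with $NAME's
# value. (The real tool handles arbitrary variables; this sketch handles one.)
NAME=world
RENDERED=$(printf 'hello, ${NAME}\n' | sed "s/\${NAME}/${NAME}/")
echo "$RENDERED"
```

Pinning the tool's version by building it in Bazel, as described above, avoids the usual "is it installed, and which version" problem entirely.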
H
J
I actually wanted to follow up: I think the Bazel support now is in a much better state, and I would be the first person to say that unwinding Tim's domination of make and shell would be a crowning achievement of this project. One of the challenges is around taking the checked-in generated code out; we've fixed a lot of the previous conflict and rebase issues.
J
You
know,
there's
a
lot
better
now
that
it
was
in
the
past,
but
it
is
interesting
because
today
go
bill
does
mostly
were
code
test,
as
we're
guys
go.
Build,
has
gotten
better
a
lot
of
the
advantages
that
basil
had
over
the
run
like
I,
don't
notice
as
much
anymore,
because
I
do
have
a
warmed
up
go
cash,
so
it
is
interesting.
How,
as
basil,
has
gotten
better
go,
has
also
gotten
a
little
bit
better.
J
Some
of
the
aspects
of
having
everything
checked
in
are
nice
because
we
do
just
play
well
with
vendor
and
other
tools
that
so
we
don't,
you
know,
cause
friction.
So
there
is
kind
of
a
nice
balance
if
we
kind
of
stray
too
far
from
that
we
should
probably
commit
to
it
rather
than
staying
like
moving
out
of
that
comfort
zone,
but
then
not
having
a
middle
ground.
So
that'd
be
a
good
topic
to
explore.
I
think
yeah.
A
And
I
think
one
thing
from
like
supporting
go
like
I,
think
you
know
it
we
discussed
like
if
we
want
to
actually
switch
to
basil
as
the
official
way
of
doing
things
and
like
we
wanted
to
remove
generated
code.
One
thought
was
there's
probably
ways.
Maybe
physical
modules
or
you
know
other
things
like
there
might
be
ways
we
could
export
the
like.
We
could
build
all
of
the
generated
decode
and
then
export
it
to
a
git
repository
kind
of
like
we
do
with
the
staging.
So
we
would
like
the
master.
A
Kubernetes
repository
would
not
have
anything
checked
in,
but
we
could
still
have
something
we
would
export
that
people
could
go
get
and
so
stuff.
That's
in
staging
is
a
great
example,
but
we
discovered
that
people
also
apparently
depend
on.
Are
they
vendor
Kate's
Deo,
slash
kubernetes,
which
was
not
expected,
like
maybe
expected,
I
discovered
this
when
I
remove
some
generating
code
that
broke
stuff?
So
so
that's
kind
of
I
think
you
know
still
something
to
be
discussed.
A
I
think
what
other
comments
wanna
make
on
the
merge
conflicts
where
it
used
to
be
a
really
big
deal
on
build
files
and
I
think
well.
Both
those
have
improved
I
think
what
is
great
to
hear,
but
also
there
is
some
experimental
support,
lose
connection.
I
talked
about
this
a
little
bit
as
well.
This
tool
called
Auto
Gazelle,
which
will
actually
automatically
generate
these
build
files,
and
you
won't
have
to
check
in
most
of
the
stuff
it
would
just
automatically
when
you
run
basale
figure
out
all
the
stuff.
That's
already
in
your
go
files.
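For reference, plain gazelle (the tool autogazelle builds on) is wired up with a small target like this; the `prefix` value below is a placeholder for illustration:

```python
# BUILD.bazel fragment: a runnable gazelle target from bazel-gazelle.
load("@bazel_gazelle//:def.bzl", "gazelle")

# gazelle:prefix k8s.io/example
gazelle(name = "gazelle")
```

after which `bazel run //:gazelle` regenerates the BUILD files from the Go sources; autogazelle's pitch is doing that step implicitly so the generated files never need to be checked in.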
A
H
I'm going to do a quick follow-up on this. The original point, when I raised my hand, is that I don't think Bazel provides much advantage for the cases where you would otherwise just be running `go build`; `go build` is a perfectly fast tool that has caching and all of that sort of stuff. But we do a lot of weird stuff on top of that: we're generating YAML, we're generating code, we're generating all of this stuff, and all of that already doesn't work super well with `go build`.
H
We
just
put
it
in
the
right
place
where
it's
expected
and
I
think
it's
those
once
you
stray
a
little
bit
off
the
beaten
path
of
what
go,
build,
sort
of
natively
does
that's
where
basil
really
shines,
because
you
don't
have
to
worry
about
making
sure
that
your
environment
is
exactly
right
and
on
the
auto
gazelle
front,
it's
I
tried
to
use
it
for
a
personal
project.
I
was
building
and
I
did
run
into
some
trouble
filed
a
bunch
of
issues.
H
The
maintainer
of
rules
go
Jake,
Conrad
is
very
responsive
and
very
friendly,
so
he
is
working
on
them.
I
think
it's
not
ready
for
something
like
cake
gates
yet,
but
I
think
it
could
be
ready.
For
you
know
a
smaller
repository
that
you
know
every
once
in
a
while.
You're
gonna
have
to
blow
away
your
cache
and
start
from
scratch,
and
that's
not
going
and
that'll
take
a
minute
out
of
your
day,
not
two
hours
out
of
your.
A
So
I
guess
we're
kind
of
out
of
time.
I
don't
know
if
we
actually
got
proper
action
items
out
of
this,
but
I
would
say
the
main
thing
you
know
going
forward.
Is
you
know,
file
feedback
on
what
you
think
works
and
what
doesn't
work?
You
know
always
looking
for
people
to
work
on
stuff,
there's
plenty
of
things
to
do.
I'm
always
around
on
slack.
You
can
find
me
I'm,
like
basil,
channel
or
cig
release
or
stick
testing
kind
of.
We
spend
all
these
various
places.
A
But
you
know,
if
you
have
thoughts
about
this,
you
want
to
improve
things
like
please
step
forward.
Please
help
out.
You
know
we're
always
always
sort
of
people
as
I
think
most
of
the
crew.
Today's
project
is
so
you
know,
let's
take
this
discussion
forward
and
continue
and
figure
out
make
things
better
thanks.