B
Apologies, I take very little blame for it, but yeah, we want to just jump into it. Can everyone who's on the call access the link from the calendar invite? I'm fairly sure it's a Google Doc, and it has a little bit of context about the meeting, but more importantly, there's a new link in it called "layers" that I threw in there. That should be publicly readable and editable.
B
I created it with my personal account; I jumped through many hoops, so hopefully it works. Let me know if it doesn't. Hopefully that works for everyone.
B
Cool, do you guys see that? Yep, lovely. Okay, so first I just wanted to dispel a few myths. I keep hearing these things about our end-to-end test infrastructure: that only Googlers have permission to set up and run end-to-end tests, or that it's solely Google's responsibility to set up end-to-end tests, or that they're completely divorced from the normal implementation workflow, where you add new features and then later on someone adds new end-to-end tests for them.
B
So that's great for Googlers, because I don't have to worry about those costs, but if you are trying to run end-to-end tests in the way that we run them on a regular basis, it might cost you money. I don't know if we have an agreement to provide GCP projects to other collaborators to let them use it. [inaudible]
B
So the code for it is open source. We do run that code on an instance of GKE that's administered by Googlers. We might have some other people, maybe Quinton for example, on the test-infra team, or some other people from the community, who do have permission to edit the cluster or directly access the cluster. But the code is open source, and the artifacts are open to be read by all. You couldn't go directly to our Kubernetes cluster and start poking at it, but you could edit the code or see job logs.
B
Did that answer your question? Yep? Cool. Yeah, I'm browsing this as a guest right now, just because I don't want email and calendar notifications to pop up, and I'm able to read this. Cool. So there are a lot of layers in the test infra, and they're changing all the time. As soon as this recording is done, I'm sure this will be completely obsolete, which is what makes it all so difficult to write a document about.
B
And I should also mention, I scheduled this for an hour. This might be the first session of many; I don't know how long it's going to take to cover all this, so we'll see. If we run out of time, we'll just schedule a follow-up. Based on the Google spreadsheet I sent out before, it sounds like there's interest in learning about Jsonnet and Terraform, but most people want to learn about kubernetes-anywhere.
B
So kubernetes-anywhere is a project that existed prior to the end-to-end tests, and it's based on Jsonnet and Terraform to be able to bring up a Kubernetes cluster. Prior to our end-to-end tests it had no support for kubeadm; I added support for kubeadm as the path of least resistance to get our end-to-end tests working. I'm assuming everyone here is familiar with kubeadm; it has no concept of provisioning infrastructure.
B
So provisioning of your cluster comes in three phases. The first phase is the raw infrastructure: I need some VMs, I need firewall rules. This is very specific to your cloud provider. Phase 2 is: now that I have those resources, how do I actually install Kubernetes on them and get it configured? And then there's phase 3. So in phase 1 you're generally working with the cloud provider's APIs; at phase 2...
B
You still need to know about the cloud provider a little bit, to know how to install Kubernetes, but in our use case it's generally just calling kubeadm init or join. And then in phase 3, all you need is a kubeconfig file, because for add-ons you can just use kubectl pointed at the master to apply the add-ons. There's a better README here.
B
So, the way this works: I'll do a quick demo, for which I have a little remnant still left over. Can people see my terminal now that it's in a different window? Okay, so this is kubernetes-anywhere. If I go full screen, does it break because I'm going full screen?
B
It looks like this when you first check it out; there is nothing there, so you have to build this up. In order to configure this, we use a tool called Kconfig which, if you've ever developed the Linux kernel or tried to compile the kernel yourself, you're probably familiar with: like make menuconfig, and you can say make config. We're using literally the same tool to build this up, but we define the three different phases independently. There's a bunch of variables you can set: booleans, strings, and so on.
B
Everything else is pretty much the defaults, and this completely defines how you want kubernetes-anywhere to build your cluster. And then the way you actually perform actions, instead of having a binary or other Go code, is Makefile targets. The project existed before me, but you can read through the Makefile and see that we have different targets like deploy-cluster and destroy-cluster.
B
It's all pretty self-explanatory. There are also some intermediate steps it shows, but we also have these dependencies on Jsonnet and Terraform, so if you actually try to use the Makefile and run make targets just in your normal shell, you're probably missing a lot of dependencies. So there's this convenience target, called docker-dev, that builds a Docker image that has all the dependencies.
B
It drops you in a shell with the current working directory mounted under /opt/kubernetes-anywhere, so any changes you make are actually reflected in your parent environment. All right: if I do make docker-dev, I'm in the shell, and I can run make menuconfig. This will be very familiar if you've ever tried to configure the Linux kernel.
B
So what Kconfig gives us is that we can define configuration. I'd like to show you really quickly: each phase has its own Kconfig file that describes the parameters it wants. You have these blocks that say: hey, I have something called number of nodes; it has a default, it has a range, a little description. And then it has some basic if-then-else logic, so you can even gate options on other settings.
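The block structure being described can be sketched as a Kconfig fragment. This is illustrative (the option names are made up, not the repo's actual file), but the syntax is standard Kconfig:

```kconfig
# Hypothetical example of the kind of block described above: an option
# with a type, a prompt, a default, a range, and a help text, plus a
# submenu gated by an if/endif on another option.
config PHASE1_NUM_NODES
	int "Number of nodes"
	default 4
	range 1 1000
	help
	  How many nodes to provision in phase 1.

config PHASE3_RUN_ADDON_MANAGER
	bool "Run the addon manager"
	default y

if PHASE3_RUN_ADDON_MANAGER
config PHASE3_KUBE_PROXY
	bool "Deploy kube-proxy"
	default y
endif
```

The gating is exactly what the speaker describes next: if you turn the add-on manager off, the tool never asks about specific add-ons.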
B
So in my phase 3, for add-ons: if I don't want to run the add-on manager, there's no reason to ask me about specific add-ons to run. There's a little bit of submenu support and things like that. Right now I have it set up to just provision 3 nodes, my cloud provider is GCE, I have my project set to my GCP project, and in phase 2 I've simply said I want to use kubeadm.
B
The rest of kubernetes-anywhere, as I mentioned, is based on Jsonnet and Terraform. At a very high level, Terraform takes configuration files that describe your cloud infrastructure and allows you to say: terraform apply, here's my configuration. If you're starting from zero and you have no resources created, it will create all those resources; and if you already have those resources and you change your configuration file and say terraform apply, it will reconcile the changes in your configuration against the current...
B
The current state of your overall infrastructure, that is. I think I see comments coming in; if anyone's actually reading those, please shout them out, because I don't have a good view to see everything. Terraform's configuration is static, in that I don't believe it supports much templating.
B
So in order to address that, because we do want to have a general definition of what a GCE cluster looks like, a general definition of what an Azure cluster looks like, and then just fill in some variables, that's where Jsonnet comes in. To give an example: for GCE, which I'm most familiar with (sorry to the other cloud providers, although there are examples for them too), we have this gce.jsonnet, and Jsonnet has definitions for how to write functions and how to deal with different scopes.
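A sketch of what that parameterization looks like. This is illustrative Jsonnet, not the repo's actual gce.jsonnet; the file and field names are assumptions. A function describes a generic instance, and values from the converted config fill in the specifics:

```jsonnet
// Hypothetical sketch: config.json is the JSON form of the .config file.
local cfg = import "config.json";

// A generic instance template, parameterized by name and machine type.
local instance(name, machineType) = {
  name: name,
  machine_type: machineType,
  zone: cfg["phase1.gce.zone"],
};

{
  master: instance("master", "n1-standard-2"),
  nodes: [
    instance("node-%d" % i, "n1-standard-1")
    for i in std.range(1, cfg["phase1.num_nodes"])
  ],
}
```

Running the jsonnet tool over a file like this expands it into the plain JSON that Terraform ultimately consumes.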
B
You can pass arguments around, you can do parameter substitution, so we can generically describe what a cluster looks like but then plug in specifics based on your configuration. And when we run Jsonnet against this, you can see it kind of looks JSON-ish; I think it's actually a superset of JSON, and the output of that step is a full JSON file that is completely expanded. So I'm just going to run make deploy-cluster, and you'll see we generate a token for kubeadm.
B
That's just because that's kind of a difficult thing to do when you're trying to automate this: SSH to a VM, capture the output of a running command, and then use that as the input to another command. So we actually don't have end-to-end coverage for that kind of scenario.
B
It's just easy enough that way to put together, but if you have ideas of how to script that, feel free; we can have other ways of initializing. We also always do token discovery, instead of file-based discovery or HTTPS discovery, so that's a path that's untested right now. Okay, so here we go: this is done, I have a kubeadm cluster, and I have output.
B
Terraform allows you to specify things that are important to output after you're done, so our master IP is pretty important, but you can also see some intermediate artifacts. Even though we were working with this flat .config file, where you can see periods are used as a delimiter between basically the different menu levels, we actually want to feed something that looks like this into Jsonnet. So there's a little utility that converts between the two, because Jsonnet wants JSON input.
B
You can see this is a lot more verbose (I'm going through page by page), a lot more difficult to read and manage than the level of abstraction the config gives you. One of the goals of kubernetes-anywhere was to make it easy to set up and accurate to provision: it should be reproducible, but the configuration should be concise. But if you really want to, you should be able to dig into those other artifacts to see exactly what's happening.
B
Right. So ideally the different phases would be completely decoupled, and originally I had it somewhat decoupled. It's a little weird, and that's just because it's easier, when you're provisioning a VM, to say: here's the startup script, please run it, rather than provision the VM, wait for it to come alive, and then send a command against it. But the latter is something we'll really need for kubeadm upgrade tests, so I think we may do some restructuring here to actually decouple these.
B
I'll show you exactly what I mean. In phase 1 for GCE, here's the configure-vm script, and if you're not using kubeadm, the other implementation for phase 2 is Ignition. What we do in this Jsonnet configuration, when we define the startup scripts, is say that we're going to import this generic configure-vm.sh, and then, based on your phase 2 provider, we're also going to append these other subscripts, either for Ignition or for kubeadm. So the generic configure-vm script...
B
It installs Docker; basically it just installs and starts Docker. Then, if you're using kubeadm, what we append to that are steps to specifically fetch the token from the metadata server, figure out what version of kubeadm we should be using and what version of Kubernetes we should install with it (I'll explain this entire ugly block in a minute). And then, if you have the master role, if you're starting up as the master, we do kubeadm init; if you're a node, you do kubeadm join with the token.
B
All
of
this
is
set
up
with
generic
enough
layers
that
we
could
easily
say
well
why
we
want
end-to-end
tests
to
run
against
a
sure
or
AWS,
and
we
should
be
able
to
swap
out
the
images,
but
right
now
the
only
fully
supported
and
end
workflow
is
GZ
D,
plus
two
bad
men
using
Ubuntu
images,
I
believe
Debian
images
working
out,
but
before
the
old
latest,
stable
Debian
image
didn't
have
secrets
to
have
properly
at
boot
yeah.
So
we
should
be
able
to
add
CentOS
or
whatever
other
false
images
that
we
want
right
now.
B
And you'll see why this was difficult to get into place: adding kubeadm support here was actually the simple part. It's all of the layers above it, getting it integrated with our testing infra, that let us actually run end-to-end tests. But now that everything is in place, we have people that have been adding on.
B
Another assumption is that you're going to use the Weave Net CNI provider, though someone else has started work, which I think is in review (it's pretty close, although I think it stalled a little bit), to use other CNI providers instead. So if other people are motivated to end-to-end test other cloud providers or other OS images, I'm happy to offer guidance. I don't have the bandwidth to do it all myself, but I can definitely walk you through the touch points.
B
Okay, so we plumb that through, and it gets added to the kubeadm init invocation. If you specify a Kubernetes version, that's used; we also want to be able to test the default when you don't specify a version, since that's what a lot of people do. I think kubeadm will then default to stable-1.7 or stable-1.6 or whatever. Now, the kubeadm version is a little hacky: right now you can specify the string "stable", which means, I want you to use the stable sources.
B
I may also need to add an else for unstable, because we do have an unstable channel for the debs, and then we could set up end-to-end tests that actually verify when we build and push to the unstable channel. Right now we're usually kicking off manual end-to-end tests before a release is marked stable, but it'd be nice to support unstable. Here we just do an apt-get update and then install the components that we need.
B
Otherwise,
you
can
also
specify
a
Tibetan
version
that
looks
like
a
Google
Cloud
Storage
URL,
and
this
is
how
we
plumb,
through
in
our
are
actually
doing
tests.
If
you
have
a
pre
submit
test
or
in
destroying
a
CI
test
after
committed
run,
we
upload
the
artifacts
to
a
pretty
long,
uniquely
qualified
URL
in
angular
cloud
storage.
And
so,
if
that's
the
case,
you
specify
that
string.
B
Here
we
use
the
GS
util
to
sink
down
entire
directory
to
a
temporary
directory,
and
then
we
use
the
package
I
for
the
artifacts
that
we
want
ignoring
failures,
because
if
you
just
use
D
package
directly,
it's
not
going
to
automatically
fetch
your
dependencies.
They'll
just
say:
I
tried
to
install
these
and
I
couldn't
find,
or
you
don't
have
these
dependencies
installed.
So
then
we
immediately
followed
up
with
a
command
to
fix
that
and
fix
the
state
of
our
apps
packages
and
install
dependencies.
B
Now that kubeadm is supported, people have just stopped caring about Ignition, but since I, very selfishly, only added kubeadm support in a very limited scope, just for GCE with the Ubuntu images, we can't just immediately remove it. We could deprecate, but not immediately remove, Ignition, because if you want to deploy with Azure or AWS or any other cloud provider, Ignition is the only supported mechanism. I also think that Ignition is currently broken for 1.6 and 1.7 clusters, so I don't know if anyone's actually using it anyway.
A
What I was thinking about is: can somebody test it with 1.7 and 1.6, and if it's broken, we'll just remove it? And if we do that, we can also get rid of kube-dns and maybe some other things that kubeadm automatically deploys. And we can add, say, an add-ons phase to the kubeadm path, which would be much more compatible, so it could use other OS images. I mean, it could just be an if block.
A
What's worth mentioning is probably that the GCS URL is coming from, like, the Bazel deb build, but we have another duplication thing there, where we have the actual release debs and we have duplicated Bazel debs coming from CI, yeah.
B
So I'll talk about the way those jobs get orchestrated a little later, but you're right: this presupposes that we're using the debs that were generated by the Bazel CI. Let me jump in, in case anyone else didn't follow. When we do official releases, we actually have different logic. The well-supported path is: we have the release repository, kubernetes/release, and then...
B
Sub
directories
for
Deb
and
rpm
Debbie
Union
rpm
for
building
these
artifacts,
and
so
when
we
do
full
releases,
we
actually
run
scripts
here
to
build
them
in
one
way
and
as
a
own
saison
experiments.
But
there
is
momentum
around
building
them
in
another
way,
using
Babel,
which
is
better
for
future
means,
but
isn't
it
doesn't
have
feature
parity
or
these
are
our
build
targets?
B
Don't
have
feature
parity
with
the
release
scripts
right
now,
so
we
don't
fully
trust
it
so
not
use
for
our
real
reduces,
but
we
do
use
it
for
our
CI
testing
now,
just
because
it's
a
bit
impractical
to
use
these
release
scripts
during
our
CI
testing.
Building.
All
of
our
Debian
images
take
something
like
an
hour
which
is
not
fantastic
for
a
pre
submit
job.
That's
blocking
your
PR,
but
we
are.
We
do
have
momentum
toward
moving
everything
onto
Basel
and
I.
Don't
know,
will
defecate
a
lot
of
this
release.
B
Stuff
I
don't
know
if
will
defecate
make
file
support
if
everyone's
just
using
Basile
instead,
but
it's
nice
that
we
have
our
Basile
artifacts
in
the
main
repo
versioned
with
the
rest
of
our
code,
whereas
we
actually
just
had
a
plug
where
we
made
it
a
change.
Someone
made
a
change
to
this
release,
repository
for
to
Badman
Kampf
for
our
debian
and
rpms
and
said
this
doesn't
get
versioned
with
our
our
code.
The
change
was
only
supposed
to
apply
to
1/8
clusters,
but
was
accidentally
built
into
1/7
releases
and
it
caused
a
bug.
A
Just to touch on the versioning layer we left out there: the debs support versioning, with stable, unstable, and nightly, like different tranches. So in this case we would just put it in the unstable directory only, not stable. But again, it seems like we don't have feature parity with rpms there, possibly, in the release repo.
B
And
up
until
very
recently,
we
didn't
have
versions
stamping
working
correctly,
so
we
would
always,
if
you
queer,
the
version
of
the
API
server,
iOS
8
0,
dot,
0
dot
0
instead
of
action
and
using
the
proper
build
version.
I
think
that
sticks
now,
I
think
Jeff
Grafton
is
basically
been
playing
whack-a-mole,
with
the
remaining
gaps
between
our
basil
builds
and
our
make
file.
Docker
based
builds,
but
if
across
builds
are
one
of
the
last
big
things
preventing
us
from
using
basil
for
full
releases.
B
Cool
so
that's
kind
of
an
overview
of
kubernetes
anywhere
layer,
and
so,
as
I
mentioned,
the
main
interface
here
is:
we
want
to
create
a
config
file
and
then
we
want
to
run
make
file
targets
like
make
deploy
cluster
or
make
destroy
cluster,
to
bring
them
up
and
bring
them
down.
So
how
we
do
that
in
the
in
doin
setting
if
we
jump
out
one
layer,
you.
B
We ignore it by default; there are some temporary files, there's some state. When you want to set up the service account used for GCE, you have to create a service account and store it in this directory; it assumes you name the file exactly account.json. I use a personal development project that happens to be billed to Google, but I set it up; it's not a shared thing. When our end-to-end tests run, we have...
B
We
have
these
notions
of
a
pool
of
projects
that
get
used
and
they're
all
owned
by
Google
right
now
and
the
way
the
plumbing
works
is
I,
think
it
gets
set
as
an
environment
variable
by
prowl,
so
that
our
job
can
reference.
It
and
I'll
show
how
that
actually
becomes
account
on
Jason,
but
that
is
something
like
youyou
could
absolutely
set
up
a
GCP
project
and
just
create
a
service
account,
and
when
you
do,
when
you
set
up
your
configuration,
you
specify
right
here.
You
can
see.
B
This is my personal project. But Google's projects, and I don't know if I've actually been answering your question, come through a different way, and they're managed by a different pool service. Previously, our end-to-end tests had a hard-coded project that they should use, and they knew how to get the service accounts because Prow does magic with it.
B
Yeah, good point. So if you look at the Makefile target, if you do make deploy, it does a deploy of the cluster; this is actually just a phony target that depends on deploy-cluster, which is phase 1 and bring-up. It runs a separate script for validation, and then it runs the add-ons. If we look at the definition of the validate target: first of all, it sets the KUBECONFIG correctly and then runs util/validate. So there's just this little utility.
B
One of the things that I completely glossed over, because I wanted to get to other things, but I'll mention: if you were actually trying to run end-to-end tests, there's another flag you'll probably want to use. When you do make deploy-cluster, it won't actually validate that your cluster came up and that you can communicate with it via kubectl; it just relies on Terraform exiting successfully, so it believes everything should work.
B
The
old
ignition
based
phase
two
was
actually
able
to
construct
your
cute
perfect
file
locally
on
the
client-side
with
the
credentials
you
needed
to
talk
to
the
master
in
Mattoon,
so
after
Jason
and
terraform
ran,
you
had
a
cute
config
file
locally.
That
was
completely
valid
because
it
was
generating
the
certs
using
JSON
or
some
combination
of
JSON
and
terraform
and
sort
of
locally
which
you
use
to
Badman,
even
though
we're
feeding
in
that
initial
token,
to
use
so
that
the
the
nodes
are
able
to
join
it.
B
The
certificates
that
we
need
and
the
full
queue
config
file
that
we
need
is
generated
on
the
master
and
stays
on
the
master.
Unless
you
specifically
request
that
it
gets
pulled
back,
I'll
show
you
what
that
looks.
Like
I
mentioned
this
new
script.
So
when
you
do
a
deploy,
if
you
pass
it
or
if
you
set
this
environment
variable
that
I
want
to
wait
for
cubic
again,
this
is
optional
because
you
might
be
an
environment
where
you
bring
up
a
cluster
that
is
air-gapped.
Maybe
you
don't
necessarily
want
to
be
able
to
talk
to
it.
B
It's
just
running
something
automated
or
if
you're,
using
the
phase
2
provider
of
ignition.
You
don't
even
need
this,
but
if
you
set
this
environment
variable,
then
it
runs
this
fescue
config,
which
knows
how
to
SSH
to
the
machine
and
actually
grab
this
to
su
config
file
from
disk
in
this
horribly
ugly,
long
command,
where
we
can't
log
in
as.
B
Only root has permission to the file, so instead of doing scp, we actually do a sudo cat of the file and capture its output. This will actually bring back the kubeconfig file, the correct file that's able to communicate with the master. So if you didn't set this, likely your validation failed, because it was just using the wrong kubeconfig and was probably being denied, instead of not finding the nodes, and that was just hidden in the output of that utility script.
B
Yeah, and we test one end-to-end flow very well, but, like I said before, there are many other branching workflows that we want to cover. Like Tim mentioned: instead of feeding it the token, our instructions usually tell people to just run kubeadm init and copy the token when doing the join, and I'd love to test both cases. So breaking these things up so that they're more composable, so that we can more easily test...
B
All
the
different
scenarios
would
be
fantastic,
but
I
also
have
a
desire
to
have
less
reliance
on
our
end-to-end
tests,
just
because
they're,
flaky
and
cost
money
to
run
and
they're
constantly
breaking
so
I
would
actually
love
a
stronger
integration
test
framework.
But
that's
I
have
a
different
doctor
that
that's
not
ready
yet
so
I'll
I'll
table
that
for
now
any
more
questions
about
to
anyone
all.
B
A large part of our efforts is going to be around refactoring things to make them more composable, so that phase 1 isn't tied intimately to phase 2, which is tied intimately to the way we invoke kubeadm, but instead allows you to send arbitrary commands to a cluster. Or, at the very least: since kubernetes-anywhere is a lifecycle project, it brings up and destroys clusters, but it has no concept of upgrades.
B
You
can
do
scaling
by
rewriting
or
editing
your
dock
and
gig
file
to
change
the
number
of
nodes
and
then
do
another
deploy
and
care
reform
will
scale
it
up
or
down.
But
upgrading
is
another
really
important
life
cycle
event
that
it
has
no
conflict
up
now.
So,
in
order
to
support
that
for
our
in
doing
test,
we
want
to
do
into
and
test
of
Cuba
and
upgrades.
I.
B
The
layer
above
that
that's
actually
integrated
with
our
test
stack,
is
called
cube
test
and
if
you
have
ever
tried
to
run
end-to-end
test
before,
especially
maybe
a
few
months
ago,
you
probably
saw
a
command
that
was
something
like
go
run
e
to
a
hack,
not
go
that
command
is
deprecated,
and
this
file
is
now
just
a
thin
wrapper
around
to
test.
Obviously,
this
is
not
a
fantastic
interface
to
use.
That
means
all
of
your
code
has
to
be
in
one
file.
B
So
when
you
run
two
tests,
the
main
flags
are
going
to
have
our
up-down
and
tests
and
some
pinko
parameters
and
up
means
I,
want
you
to
provision
a
brand
new
cluster
when
we
actually
go
to
the
flags
a
lot
of
likes,
so
up
means
preventing
the
cluster
down
means.
Tear
down
my
cluster
and
test
means
run
some
set
of
our
end-to-end
tests
against
it
and
our
automated
tests
to
do
up
testing
down.
B
But
if
you
were
testing
locally,
you
might
want
to
bring
up
the
cluster
once
and
then
do
multiple
iterations
of
end-to-end
tests
against
it
before
you
bring
it
down
just
to
save
yourself
time,
which
is
why
these
are
separated,
but
there's
this
concept
of
a
deployer
or
deployment,
and
this
is
how
we
extract
away
how
to
initialize
your
cluster.
So
before
this
interface
existed,
it
just
assumes
you
are
always
deploying
with
Cuba
SH,
but
there
are
many
more
options
now
so
this
interface
lets,
you
specify
the
implementation
for
it.
B
Instead
of
calling
Cuba
Cuba,
we
just
call
it
bash,
but
you
can
also
specify
I
want
to
bring
up
a
GK
cluster,
your
cops
cluster
or
now
a
kubernetes
anywhere
cluster.
So
our
endo
end
jobs,
use
communities
anywhere
with
some
configuration
and
the
implementation
of
that
deployment
is
in
anywhere
dot
go.
So
we
did
this.
You
can
see
the
flags
that
are
specific
to
this.
We
have
to
know
where
created
anywhere
is
because
it
has
our
make
file
and
our
care
reform
JSON.
It's
a
templates
and
all
that.
B
So
it
assumes
that
you
have
already
checked
out.
Kuben
is
anywhere
somewhere
and
give
it
a
path.
The
thing
is
to
provider
you
want
to,
you
want
to
use,
so
the
default
is
ignition,
because
ignition
is
what
pre-existed
@q
madman
support,
but
you
can
set
this
cue
madman
and
then
just
the
same
parameters.
I
showed
you
in
the
config
file,
like
what
version
of
cube
admin.
Do
you
want
to
use
what
version
have
proven
in
these
two
monkeys?
B
You
can
also
optionally
give
it
a
cluster
name
and
a
timeout
is
pretty
useful
and
you'll
see
this
go
templates
is
basically
writing
that
dot
config
file
that
we
want
using
the
command-line
parameters.
So
some
of
these
are
hard-coded.
We
can
always
plumb
in
more
flags
and
change.
These
two
parameters
to
substitute
so
like
all
of
our
end-to-end
tests,
use
four
nodes
and
they
all
currently
use
GC.
But
if
you
want
to
add
support
for
something
else,
just
add
a
new
flag
and
plumb
it
through
to
where
we
execute
this
template.
B
Let's
go
up
yeah
so
when
we
are
implementation
to
bring
a
cluster
up,
is
this
exactly
what
I
showed
you
manually?
Is
we
just
run
this
deploy
targets?
Deploy
is
actually
a
superset
of
deploy
cluster.
That
goes
through
all
three
phases
and
we
make
sure
that
we
passed
this
environment
variable
so
that
we
wait
for
the
cue
config
that
we
pulled
back.
That's
kind
of
it,
although
we
have
our
own
separate
validation,
to
wait
and
make
sure
that
the
number
of
nodes
matches
what
we
expect
and
and
similarly,
when
we
bring
down
our
cluster
well.
B
First,
when
you
find
the
cube
path,
but
we
recall
destroyed,
and
we
we
pass
through
this
other
option
that
automate,
if
you
actually
just
do,
make
destroy
manually,
it'll
prompt
you.
If
they
are,
you
absolutely
sure
you
want
to
bring
down
this
cluster
and
there's
an
environment
variable
just
disable
that
and
make
it
completely
automated.
But
so,
oh.
B
So kubernetes-anywhere existed as a product before we ran the end-to-end tests with it, and I don't know how popular a project it is now that there are so many more tools to bring up clusters; I don't know if anyone uses it for production, or if anyone really uses it to play around with, when there are other options like kops. But as much as possible, I've tried to utilize it as a tool that lets us do end-to-end tests without completely distorting it and making it only for end-to-end tests.
B
If
that
makes
it
so
as
a
product,
it
makes
sense
to
independently
want
to
verify
that
your
cluster
came
up
correctly
and
I
actually
think
it's
kind
of
a
bug
that
I
have
more
validation
here.
I
think
this
was
before
I
think
originally
I
was
just
running
stage.
One
and
two
and
I
didn't
rely
on
the
validation,
so
I
wondered
Gate,
village
and
I
can't
remember,
but
this
we
can
probably
safely
remove
this
and
depend
on
each
validation.
A
I was just thinking: if we test Ignition and it's broken, we can remove it, and then, with kubeadm as the only phase 2, remove the obsolete parts. We should then be able to get rid of validate; I mean, we should be able to convert this into something that actually works and is actually well shaped to use for e2e tests, other than just being a project flying around without anyone using it, yeah.
B
I,
don't
know
how
many
people
use
it
for
its
original
intended
purpose
versus
I'm
sure
it
gives
a
lot
of
use
now,
just
in
our
automated
end
doing
test
I've
tried
to
not
step
on
toes
and
overly
perverts.
The
codebase,
just
for
my
purposes,
which
was
sending
setting
up
into
my
chest,
but
I
we
can
put.
We
can
make
an
issue
in
the
repository
and
say
hey.
B
Cool. So invoking kubetest is how this stuff happens, how we bring up the cluster, and then we actually invoke Ginkgo. If you've seen our end-to-end tests... let me very briefly show what those look like. The end-to-end tests themselves that we run against the cluster live in the kubernetes repository, in the test/e2e subdirectory, and every test has a kind of tags. If you've seen what our Ginkgo flags look like, you can opt into or out of different tags. Let me find something that might...
B
See, okay. So in the description of the test, not only is there something human-readable, but you can specify these tags. That one's [Serial]: I don't think that test should run in parallel, because it would probably just have a race condition with itself. And it's also [Disruptive], because it's changing state and maybe doesn't clean that state up afterward. And we have a lot of end-to-end tests that are marked [Conformance], which basically means: if we run all of the conformance tests against a candidate Kubernetes cluster and they all pass, that is...
B
Don't
think
we
officially
certify.
But
we
said
it's
pretty
much
good
enough
to
tell
you
that
that
this
is
function,
whereas
there
are
a
lot
of.
There
are
a
lot
of
end-to-end
tests
that
are
targeting
a
very
specific
feature
that
might
not
be
enabled
everywhere
they
might
be
targeting
a
specific
cloud
provider
which
is
obviously
not
applicable
everywhere,
and
so
the
conformance
tests
are
the
bare-bones.
You
know
I
want
to
create
some
pods
delete.
The
pods
at
the
very
basic
test.
B
These should pass on any cluster that calls itself a Kubernetes cluster, and so most of our end-to-end test suites just run all of our conformance tests. Up until very recently, all of them only ran the conformance tests, until we added these new tests that were for kubeadm features like bootstrap tokens. So someone can opt into those tests by saying: I want to additionally run tests for the bootstrap tokens feature, or whatever it might be.
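The "conformance plus one opted-in feature" pattern he describes might look roughly like this; the test names and the [Feature:BootstrapTokens] tag spelling are illustrative assumptions, not taken from the real suite:

```shell
# Hypothetical test descriptions:
tests='create pods [Conformance]
bootstrap token auth [Feature:BootstrapTokens]
gpu scheduling [Feature:GPUDevicePlugin]'

# A default suite: conformance tests only.
echo "$tests" | grep -E '\[Conformance\]'

# An opt-in suite: conformance plus the bootstrap tokens feature,
# by widening the focus regex to a union of the two tags.
echo "$tests" | grep -E '\[Conformance\]|\[Feature:BootstrapTokens\]'
```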
G
B
So let me go back to the doc. We only have four minutes left, so I'm going to briefly describe these, so that when I set up follow-up sessions I'll try to target one or two layers each, see who's interested in what layers, and I'll try to dive in as deeply as I need to there. But just so you have an idea of what they do.
B
If we go all the way to the outermost layer, Prow is our job orchestrator. It has a configuration file to define what all of our end-to-end jobs look like, and not only end-to-end tests, but also any presubmit tests, which might just be doing linting or our validation checks; it's also how our Bazel run jobs do Bazel building and Bazel testing. It allows us to define what those look like.
B
It has some chaining behavior, which Luke alluded to, because our end-to-end jobs are actually chained off the outputs of the Bazel run jobs. And Prow itself, as a product, runs on a Kubernetes cluster. We happen to run our instance on GKE, but it should run on any Kubernetes cluster, which is maybe the only dogfooding a lot of us do of our own projects. And so that's where we define what the overall job looks like.
B
We have a custom image to have all of the dependencies that we need, like jsonnet, terraform, and potentially a few other things I can't remember, which differ from the rest of our end-to-end jobs. There's a generic kubekins-e2e image that most end-to-end jobs use, but since we use kubernetes-anywhere, which has its own dependencies, we have our own custom image; I can link you to how it gets built. That invokes bootstrap.py, which, even though it's prefixed with jenkins, is actually the main entry point for any job.
B
Bootstrap's responsibilities also include checking out the source repositories that you might be running tests against. So if you're doing a presubmit pull job, it will know: I should check out kubernetes at this commit, I should grab these commits from your pull request, and I should do the merge, so that all of the other layers underneath it don't have to worry about how to do a merge or how to pull the references from a PR job.
B
Bootstrap
just
set
them
out
for
you
in
a
local
directory,
so
you
don't
have
to
worry
about
it.
Most
of
our
job
configurations
in
this
special
file
called
job
/
config,
dot
Jason.
You
can
grep
through
it
for
cube
admins
to
see
what
those
jobs
look
like.
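A sketch of the kind of grep he suggests; the config file contents here are a made-up mock in the general shape of a scenario-based job config, not the real jobs/config.json:

```shell
# Mock config in the rough shape of jobs/config.json: each job names
# a scenario plus the args to pass it. Job names and flags are invented.
cat > config.json <<'EOF'
{
  "ci-kubernetes-e2e-kubeadm-gce": {
    "scenario": "kubernetes_e2e",
    "args": ["--deployment=kubernetes-anywhere", "--kubeadm=ci"]
  },
  "ci-kubernetes-e2e-gci-gce": {
    "scenario": "kubernetes_e2e",
    "args": ["--deployment=gce"]
  }
}
EOF

# Analogous to grepping the real file for kubeadm-related jobs:
grep -n 'kubeadm' config.json
```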
It specifies what scenario to use and the flags to attach to it. The scenario we use is kubernetes_e2e; all the scenarios are written in Python for some reason, so we just say kubernetes_e2e and it just assumes that it's in the scenarios directory. And the responsibilities of this layer are:
B
It has some flags that aren't flags of kubetest, but it's mainly a wrapper around kubetest, which is a wrapper around Ginkgo. You can specify things like: I have a file of environment parameters, and the kubernetes_e2e scenario will read that file and actually set those environment parameters before invoking kubetest, whereas kubetest doesn't have any concept of that; it just assumes that you've set your environment parameters correctly. And that is a very brief overview, and we have one minute left. Are there any quick last-minute questions?
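The env-file behavior of the scenario might look roughly like this in shell; the file name and variables are hypothetical:

```shell
# A hypothetical env file of KEY=VALUE lines, like the scenario reads:
cat > test.env <<'EOF'
KUBERNETES_PROVIDER=kubernetes-anywhere
NUM_NODES=2
EOF

# Export every assignment, the way the scenario sets environment
# parameters before invoking kubetest (which only reads the environment):
set -a           # auto-export all assignments
. ./test.env
set +a

echo "$KUBERNETES_PROVIDER with $NUM_NODES nodes"
```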
B
Otherwise, I'll go through the same thing, maybe next week if the same time will work for everyone, but I'll figure out what layer to cover next and do a deep dive as a follow-up. Anything else?
A
H
B
One of my colleagues, Jacob Simpson (a different Jacob), put this together on how to do local testing, where you develop using all of your local artifacts: you just do a local build and you can use kubernetes-anywhere to bring up your entire cluster. This might be a little dated, and it might be a little easier to do now, but at least it's a good capture of the overall workflow, and you could potentially adapt it if there's a new way to do it now.
B
Yes, if you want to change that, it's actually really easy. The only place that knows that it's Google Cloud Storage is this configure-vm kubeadm script, and it just has this branch logic that says: this looks like Google Cloud Storage, so I know how to pull it down with gsutil. If you want to store it in, say, S3, just put in an elif: if you match S3, then use whatever tool you have, like aws s3 cp, to do that. It should be super easy to do.
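The branch logic being described might be sketched like this; the function name and the echo-only behavior are illustrative, not the real configure-vm script:

```shell
# Sketch: dispatch on the artifact URL scheme, with the suggested
# S3 elif added alongside the existing Google Cloud Storage branch.
download_artifact() {
  url="$1"
  case "$url" in
    gs://*)  echo "would run: gsutil cp $url ." ;;
    s3://*)  echo "would run: aws s3 cp $url ." ;;   # the suggested elif
    *)       echo "unsupported scheme: $url"; return 1 ;;
  esac
}

download_artifact gs://my-bucket/kubeadm
download_artifact s3://my-bucket/kubeadm
```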
B
H
D
H
B
Yeah, and I should clarify: I took advantage of kubernetes-anywhere because of my interest in kubeadm and testing it. I am NOT one of the collaborators or maintainers of kubernetes-anywhere. I've done a lot of pull requests against it and actually gotten them merged, but I don't really pay attention to issues there, just because I'm a consumer.
B
Good. So, sorry if no one is paying attention to new issues that you might have filed. I would assume that you're good to go: the entire purpose of the project is to support more and more implementations of these things, so just do it, I guess. Okay. Cool, we're three minutes over. My GitHub might be linked somewhere if you have more questions; I have the same handle on Slack (I'm pipejakob, right here), and I'll be in touch on this issue, or the next one in the series.