From YouTube: Kubernetes SIG Testing 2017-07-18
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: Okay, hi everybody. Today is Tuesday, July 18th, and this is SIG Testing. On the agenda today: Morgan has raised a request from the service-catalog project, which is essentially, what is the onboarding process for new projects who wish to use what we offer in SIG Testing? Is there documentation for this? What components could be used? What's the process, et cetera, et cetera? And I think — Caleb, if you want to talk about this, you're welcome to, or I can try and scramble some tabs together real fast.
A: One of the things to note here — so, first off, these are all the tests that SIG API Machinery cares about. They got here by doing a pretty easy thing, which I believe is happening as a result of the fix-it going on at Google right now, where there has been a movement to rename some of the end-to-end tests and include the SIG name in the test definition. SIG API Machinery did it in a way that was pioneered by Ryan Mitchell for sig-storage.
A: Inside of the e2e directory, there are starting to be a lot of directories that correspond to individual SIGs. API Machinery happens to have their own directory, and you can see there are tests related to namespaces and etcd failure and whatnot. And the way they prefix all of their test names with sig-api-machinery is by using a function, SIGDescribe, instead of the KubeDescribe function that's used for many other tests. We — wait. Stop.
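A minimal sketch of the SIGDescribe pattern being described (illustrative only: the real helper wraps Ginkgo's Describe, which is stubbed out here so the snippet stands alone, and the prefix string is the sig-api-machinery example from the discussion):

```go
package main

import "fmt"

// sigName prefixes a test name with the SIG tag, so dashboards that can
// only filter on the test name can still select this SIG's tests.
func sigName(text string) string {
	return "[sig-api-machinery] " + text
}

// describe stands in for ginkgo.Describe, which registers a test
// container under the given name.
func describe(text string, body func()) bool {
	fmt.Println("registered:", text)
	if body != nil {
		body()
	}
	return true
}

// SIGDescribe is the per-SIG wrapper: same signature as describe, but
// every name it registers carries the SIG prefix.
func SIGDescribe(text string, body func()) bool {
	return describe(sigName(text), body)
}

func main() {
	SIGDescribe("Namespaces should be deleted promptly", nil)
}
```

Because the wrapper lives in the SIG's own directory, each SIG can bake in its own prefix without touching a shared framework function.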
A: Right, so I can't necessarily speak to that. Like I said, I sort of noticed that this appears to be happening organically for a number of different SIGs, including API Machinery and Storage, and many of them are following this pattern. There had been some conversation at one point about moving everything into its own directory and then figuring this out from the directory name, but I believe Ginkgo doesn't expose that information. Erick, did you want to say something? Yeah.
D: I can speak to that. I think, you know, the goal is to put everything in its own directory so that we can set up OWNERS files to have changes get routed to someone who, hopefully, is associated with that SIG, so that they can actually review the changes — rather than some random test landing on a person who has no idea what the intent really is. So that's the why of putting it into its own directory. And then, right now, we don't have a good way — like, TestGrid…
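The routing described here is driven by an OWNERS file in each per-SIG directory; a sketch of the shape (the usernames are placeholders, not the contents of any real file):

```yaml
# test/e2e/apimachinery/OWNERS — illustrative only
reviewers:
  - some-sig-member      # gets auto-assigned to review PRs touching this dir
approvers:
  - some-sig-lead        # can /approve changes in this dir
```

With this in place, any PR touching the directory is automatically routed to people associated with the owning SIG.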
D: …it's easy to filter off of the name, but there's not a great way to filter off of other information. So the most expedient way to show just the sig-api-machinery tests is to put that in the test name. Eventually we want to get to where there's other metadata that we could filter off of, and clean up the test names, but that's sort of the most expedient idea right now. And then, yeah, in terms of…
D: I think the idea with having a SIGDescribe in each folder — I guess the pattern is, rather than having a single function in the framework directory, to put that method in each folder, so that, essentially, if we move a test from SIG A to SIG B, we don't actually have to change anything. But maybe that's not the best call; I think that's sort of just a recipe that was given to people, and so that's what everyone is following.
A: As for whether this is documented anywhere — as far as I can tell, and I've asked: no, there's no overarching issue. There are a number of requests, and I can't promise it, but I'm trying to collate some of the information so that I could reverse-engineer this into a document, if we want to. I'm just sort of trying to track it, because I'm finding not a ton of consistency in some of the test names: some things have the SIG tag at the beginning.
A: If they follow this particular pattern, all the tests are prefixed with it. But there are other things, like sig-storage, which have some tests with sig-storage here, and then other tests that have sig-storage — sorry, I'm not doing this right for the camera — prefixed on the left-hand side, or sometimes on the right-hand side: for example, sig-storage and some tests related to using ConfigMaps, which don't currently live inside of the storage directory. So anyway, like I said, I can try and piece together more information on this.
C: I think anything that has a to-do is better than nothing. So I know that this is part of Michael's original proposal, right? Michael had written a proposal that I commented on, and you had commented on, and some other people did, which had this as one of the means by which to track and follow things. The one other pattern that seems to be emerging — that's going on now as part of this fix-it or otherwise — is that some folks are expressing to me a preference for integration tests. I—
A: Yeah, that's really cool. If you have any examples — example PRs to throw into that meeting — that would be great. As a thing to champion and highlight and say "please do more of this," that would be fantastic, because, yes, some of the scheduling tests were among the flakiest and took the longest to run. To loop back: okay, I want to go back to the TestGrid dashboard real fast. Erick, I—
A: I think you tried to point this out to me on Friday — I'm still kind of catching up on backlog. For SIGs that still have something left to do, they get a notification on their dashboard: this "to do" tab. Let's see — there's a description of what this tab should do, exactly how they can fill that in, and the notification is telling them to please configure this, click on this link.
A: It takes them through to the configuration page, which has a really nice README now — it's continually getting improved through the efforts of Erick and Michelle and other folks — that really spells out what you can do and some of the awesome options you can use to make this better. So it's been really neat to see the TestGrid experience improve, and I look forward to us using this in a more actionable sense.
E: You know, I just want to — you know, the history of Service Catalog is fraught with this: we were among the first to do a lot of these things, so I'm sure there's many different pieces of stuff that have been integrated into testing already. Along with, you know, reusing some of the API server stuff, and reusing some of the release-infrastructure stuff, and basically being sort of this experimental proving ground for trying to figure out what Kubernetes best practices are, I just wanted to—
E: You know, I have an interest in testing personally, and so I just want to know: basically, what are the standard, expected integration points compared to what we're doing? And I want to make sure that, you know, in the future, it's documented and done the right way — the Kubernetes way, the standard way — because that's basically what we've been all about from the get-go here. Well—
E: Sure. So, you know, I guess it's probably about — not quite a year into this; I'm thinking the service-broker stuff is about a year old, but this is probably, you know, six or eight months old. But, you know, basically: what happens when you're starting a project? Okay, fine — you set up your git repository, you put some code in there. Okay, well, now the next step is I need to make sure my code runs and works, so I set up—
E: You know, we have Travis to check out the code and run basic unit and integration tests — except, is that what we should be doing? Or how can we, you know, integrate whatever results into the standard dashboard, if that's what we should be doing? I don't know. Then, okay, well, we want to make sure that our API server actually installs into a real Kubernetes deployment. So now we have a check-in server that makes — I think it's GKE calls, or GCE calls, or whatever — and, you know, injects the stuff and runs some basic end-to-end-type tests.
E: Is there an integration point that we should be using, or a results display? I don't know. And, you know, is all of this documented and such? And then another thing is the release management. Okay, well, we periodically have a release: you know, Travis automatically pushes a new image on each build — I think it's to Quay — and then, manually, we have a button that somebody has to go in and push when we tag the build, and then it pushes an official, you know, v0.14 — again to Quay. You know—
E: We kind of did our own thing, but the goal was not to do our own thing. The goal is for everything to work, and if there's something that we should be doing — or that would be helpful for us to look at, to make sure that the next person who does this doesn't have to, or does it in a very standard way such that they can plug in immediately without having any problems — then I want to help with that, as well as help, specifically, Service Catalog do whatever needs to be done.
E: No, I'm not — I don't require an answer. I just wanted to, yeah, throw that out there and make sure that this was a thing that somebody would want to talk about, and I figured the people on the call are the people who would be interested in maybe answering that question. I do not require an answer now, and I'm not in a rush — everything is wonderful in SIG Service Catalog — and so I just wanted to say: hey, here we are. And I have—
A: No, absolutely — I think I'm just trying to set expectations on the fidelity of my response. I think my big question up front is: it seems like, today, between Prow and TestGrid, we have a relatively low-friction way for people to get stuff done based on events from their repository — to run new jobs and to display results.
A: You cannot run Docker — okay, Aaron, I'm kind of looking in your direction on this, but I believe that a Prow job is capable of running shell scripts, Python, Go, whatever, but is not capable of connecting to a Docker daemon and asking the Docker daemon to docker build, docker run, docker push. Right? Okay — so correct me if you want to.
D: I mean, technically — so, that's more a matter of principle: life is a lot better if Kubernetes is managing your containers, and containers don't have access to Docker and can't do weird Docker things on the node on which they're running. But theoretically, if you absolutely must do that — and I suspect it'll be a big maintenance pain for you — you can, you know, mount in the Docker socket, and then your container can do whatever it wants. But I would strongly recommend against that. I'm—
D: Yeah, yeah — I think we could provide a lot more automation around, you know, supporting the multi-repo situation in general.
A: Yeah, multi-repos — that's another massive, massive thing to discuss at some point; I still have a tab stashed on here somewhere to go through. But to put the question back to you, Morgan: what drives you to Travis? What's working well for you with your existing Travis—
E: —setup? Well, I mean, you know, it's sort of easy to set up. That's really the only thing: it was easy to set up at the time, and so, boom. This was eight months or a year ago; it was easy to just say, "okay, Travis, we'll link it up," boom. There's nothing particularly unique to be done in the Travis process, I don't believe, that is inherent to our build, beyond, you know: if it sees a tag, it pushes an image.
E: So I don't think there's anything particular about Travis other than that it was easy to do. Okay — and something you already knew how to use, I guess? Yeah, pretty much, pretty much, pretty much. Everybody had turned it on, everybody had used it before. And, you know, this is a small program; it doesn't do much — or at least it started out small and doesn't do much now. Maybe we're growing bigger and will require more interaction between things, but right now we don't really have too much e2e test as it is.
E: It's mostly unit tests, integration tests, and then, you know, fifty different kinds of verification that we do, and code generation that we do — that's the longest time-taker. Yeah, and — I think it's Prow — the /cc and /assign and that kind of stuff works in the repo there. The other thing that I think we were interested in is some of the LGTM-type process, where, right now — or at least up until about a week ago — we had a sort of manual... yeah.
E: You click — you write a review, you say, you know, "approve" in GitHub, and then you say "LGTM," and then you click a label that says, you know, "LGTM1," because we need three of them — no, two of them. Is there any process set up for that? Because I think the core thing is basically one LGTM, and then there's approvers and there's owners and all that — does all that work? And is there a point at which we could use that? Is that something we should be expecting to be using? Okay, so—
A: Let me — well, the short version there is that plugins file, right? You can see there that you can enable plugins at either the organization level or the repo level. At the organization level, everything within the kubernetes and kubernetes-incubator organizations has the plugin enabled. If you want to turn on the LGTM functionality — or maybe, like, /lgtm whatever, /assign whatever — within your repo, you can use the label plugin. We lack documentation on what all of these plugins do right now; there are open issues about that.
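The org-versus-repo enabling being described lives in Prow's plugins file; a hedged sketch of the shape (the repo entry and plugin choices are examples, not the real configuration):

```yaml
# plugins.yaml — plugins can be enabled for a whole org or one repo
plugins:
  kubernetes:                              # org-wide: every repo in the org
    - trigger
  kubernetes-incubator/service-catalog:    # example single-repo entry
    - lgtm
    - label
```

A repo picks up whatever is enabled at its own level plus anything enabled for its organization.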
A: But the plugins are all relatively self-contained within the plugins directory, so you might be able to work out what they do, and it's a pretty easy code-review process to either turn on a plugin for the incubator or, if we think it's ready — like, I think the /assign... do we not have that already? Oh yeah, it is there, yes.
A
Accepts
the
/,
the
/
heart
and
flashy
ox
plugins
could
totally
be
turned
on
for
every
retail.
So
there's
that
we
can
I
get
a
thing
to
merge
stuff
automatically.
For
me,
I
think
that
question
is
backed
up
behind
munch
github,
which
right
now
munch
github
is
basically
a
component
instance
per
repository.
A: The LGTM process you're describing sounds a little bit more customized than what mungegithub currently offers. Mungegithub is the thing that handles /approve and the assigning of things based on OWNERS files — and I know at least the /approve thing is documented — but the label metadata is all handled by Prow. And then I wanted to tie back to something else real quick, just on the Travis discussion: I think that we would like to head in a direction—
A: So, right now, Prow uses one massive file called config.yaml. The jobs in it are primarily used by the trigger plugin — the trigger plugin is the thing that listens to GitHub events and goes and creates a Prow job, which is either executed as a pod or corresponds to a Jenkins job getting kicked off. And so that's just one huge file that lives in test-infra. We would love to get to the world where there's a Prow config file in each repo that Prow's trigger reads, just like there's a .travis.yml in each repo that Travis reads.
A
And
then
so
that
way
you
only
have
to
edit
gamal
that
relates
to
like
the
job
sense
of
achievement.
So
that's
not
to
say
that
you
can't
copy
into
the
massive
configu
animal
that
we
do
right
now.
If
you
wanted
to
stand
up
jobs,
but
just
be
there's
just
a
lot
of
DML
in
there
Eric
is
there
anything
you
want
to
add
to
all
that
stuff.
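For reference, a presubmit entry in that central config.yaml looks roughly like this (the job name, image, and command are illustrative; the real file lives in kubernetes/test-infra):

```yaml
presubmits:
  kubernetes-incubator/service-catalog:       # repo the job is triggered for
    - name: pull-service-catalog-test         # example job name
      always_run: true
      agent: kubernetes                       # run as a pod, not a Jenkins job
      spec:
        containers:
          - image: golang:1.8
            command: ["make", "test"]
```

The trigger plugin reads entries like this and kicks the job off in response to PR events on the named repo.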
A: Appreciate it — thanks. Yeah, and then there's sort of the higher-level thing that is part of the scope of this SIG, which we are interested in participating in: the formation of, like, a playbook for new projects. It seems like — it's just kind of hard. It's not clear to me what the CNCF technically requires of a project to be adopted by the CNCF. It's also unclear to me what we technically require of a project to graduate from incubator to kubernetes.
A: "You use these tools or those tools" — a checklist sort of thing would be fantastic. And long checklists — yeah, they're what doctors use, and why can't we use them? Not enough doctors use them, but yeah, checklists are great. Okay. And then the other thing I have very little understanding of is your question about e2e tests; I think I would, again, probably have to punt over to Erick on whether or not this means we need to expand—
E: —and, for e2e tests, you know, we want to, one, have a basic cluster. We want to then run against that cluster: you know, install the API server, install a Helm chart, maybe install a different Helm chart that is the broker we're going to attach to, and then, you know, run some basic tests that make use of that whole infrastructure once it's all set up. The other thing is making sure RBAC works, because we need that, and making sure DNS works, because we need that — because we use services by name.
E: There was one more thing — but, basically, RBAC is broken in Minikube, and RBAC needs to be set up in GKE, so we need to make sure that works. So we need to have a couple of little kubectl commands that run beforehand. I can't remember the details right now, but there was one more thing that the cluster needs to have before it works fully for us — it was real basic, though.
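The kind of one-off kubectl setup being described might look like this on GKE (the binding name is an example; the exact grants depend on what the API server and broker need):

```shell
# GKE: grant the current user cluster-admin so RBAC-protected installs succeed
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value account)"
```

Any such commands would need to run after cluster turn-up but before the project's e2e tests.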
A: So, in the context of: we want to stand up a cluster, and we may possibly want to run some additional kubectl commands to configure RBAC appropriately and make sure the right things are installed before we run the e2e tests for service catalog — that sounds like we want to make a service-catalog scenario, rather than reuse the existing Kubernetes e2e scenario. I—
D: I'd say — I mean, the two things that immediately come to mind: one, you can use kubetest to deploy your cluster. So, like, there are flags you pass kubetest — you can run kubetest with --up without passing --test, then you could run whatever additional things you want, like configuring RBAC or whatever, and then call kubetest with --test and --down to run the tests and tear down the cluster at the end.
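The split just described can be sketched as three steps (flag names as discussed; the in-between command and binding name are illustrative):

```shell
# 1. Stand up the cluster only
kubetest --up

# 2. Any project-specific setup between turn-up and test, e.g. RBAC bindings
kubectl create clusterrolebinding sc-admin \
  --clusterrole=cluster-admin --user="$(gcloud config get-value account)"

# 3. Run the tests, then tear the cluster down
kubetest --test --down
```

The point is that nothing forces --up and --test into one invocation, so a project can wedge its own setup in between.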
D: Another idea: there's the concept of different deployment strategies, so we have, like, a kops strategy, a bash strategy, a kubernetes-anywhere strategy. Theoretically, this could be, like, a new strategy, which would give you more control over what actually happens when we turn up a cluster; you could then add your additional stuff after that. But, yeah, kubetest is kind of our main launcher — that's kind of the interface.
A: That's the question I was trying to ask, because, to me, it seems like the different kubetest strategies are kind of a one-to-one mapping with different places or ways to deploy Kubernetes. So there's a strategy for GKE, a strategy for kops, a strategy for kubernetes-anywhere, right? And Service Catalog is a thing you want to deploy onto a Kubernetes cluster. "Now, I'm not particularly concerned about how it gets deployed; I just need a cluster," right? And that made—
D: Yeah, maybe — although, I guess — I don't know, it seems fairly similar to Kubemark, right? With Kubemark, you deploy a cluster, then you futz with it to make it into a bigger, fake cluster — and that actually happens; they actually have some flags inside of kubetest, which probably isn't the best way. I think this needs some further, you know, thinking. It could be a scenario; that might work. Okay—
A
Yeah
we
can
carry
this,
take
a
take,
a
sock
off
line
to
me.
It's
just
like
it's
an
architectural
boundaries.
Question
I
thought
the
cube.
Mark
scenario
seemed
to
be
like
a
one-to-one
mapping
to
the
kubernetes
on
the
score
provider:
environment,
variable
adult
cluster
shell
script
days,
and
that's
what
not
scenario
strategy
I'm
mixing
my
works,
but
everything
that
used
to
be
a
different
directory
in
the
cluster
shell
script
days
got
turned
into
a
different
strategy
in
the
coop
test
world
and
then
the
scenario
was
instead
of
a
big
long.
A
Shell
script
called
ete
test
runner
or
something
it's
not
my
phone
script
that
calls
cube
test
with
the
rights
and
a
flag
instead
of
setting
out
a
crap-ton
of
environment
variables
and
that
college
metrics
elf,
shell
streams
cool.
So
we
can
hash
out
what
the
right
here
for
you
to
add
your
son
of
eat.
We
test
somewhere
in
between
that
it
could
be
between
those
two
boundaries
or
there
might
be
one
post.
We
can
find
one.
E
Last
question:
please:
if
I
wish
we
have
one,
is
there
basically
goes?
It
goes
back
to
the
whole
Travis
and
extra
environment
thing.
Is
there
a
defined
you
guys
have
hardware
or
money
or
whatever,
that
supports
projects
to
run
things
or
is
it?
You
know
we're
at
the
mercy
of
providing
our
own
of
infra?
Bon
seems
to
run
these
things
that
I.
A
Hear
your
question:
that's
something
I'm,
not
sure
I
can
answer
right
now.
That's
right!
Right,
like
the
technical
answer,
is,
if
you
were
to
add
a
job
to
config
God
animals
such
that
crowd.
Would
if
you
set
everything
up
so
that
proud
would
kick
off
a
job
that
job
would
end
up
running
on
a
Google
in
a
Google
cloud
project
right
by
Google,
right
and
we're
all
friends
here
we
love
making
sure
our
projects
are
super
well
tested.
A: We are running long, and I guess — sorry; no, no, it's absolutely fine. I think that's all I had. I will try to see if I can get specific people to show up either next week or the following week, because I have a particular interest in things related to the CI signal, the build-cop, and the test-infra on-call roles. Specifically, I'd like to get a better understanding of what those roles are responsible for, and my big unknown was, like: how do I know if and when the release is blocked or in trouble?
A
Let's
wish
and
hopefully
meeting
either
next
week
or
the
following
week,
if
you're
interested,
please
ping
me
offline,
we'll
get
some
time
to
discuss
that
with
that
said,
I
think
that's
it
for
today,
happy
Tuesday
everybody
and
thanks
for
your
time.