From YouTube: Kubernetes SIG Testing 2019-06-18
A: We're going to have Daniel Roth talk to us a little bit about the Kyma project (I can't actually pronounce it) and their usage of Prow, which I'm pretty excited about. With time left over, we can maybe talk about how I want to handle Prow configuration for namespaces, or the mechanics of moving Prow into its own subproject. So with that, I will hand it over to you, Daniel.
B: You should see my desktop.

A: Yeah, I can see it, alright.

B: So I'm talking about Kyma and Prow, building Kyma with Prow. It starts off with some info about Kyma, just so that everyone roughly knows what it is. What is Kyma? Kyma is an open source project initiated by SAP, designed natively to work on Kubernetes. We also leverage Knative, and for us it's a flexible and easy way to connect and extend enterprise applications in a cloud world, meaning we're using Kyma to extend our enterprise applications.
B: We have several serverless functions living in there, and the service itself. We can connect enterprise services, like services from GCP, through our service catalog, and an application connector allows us to register enterprise applications. It does not necessarily have to be an SAP application; it can be any application, via API calls or events, through the Knative NATS eventing, into the Kyma cluster, where you can build services or serverless functions that consume things from these enterprise applications.
B: How do we use Prow? We have two projects on GitHub that we mainly use. github.com/kyma-project is our main project, where Kyma lives. There's also our website, kyma-project.io, that we built, and we have a CLI tool that actually integrates with kubectl as well, where you can install Kyma into a cluster; that's currently limited to GCP, and we're working on Azure and AWS as well. Our kyma-incubator project is mainly for projects that we are working on that are not yet mature enough to be in our main project.
B: For example, Varkes is our mocking framework, to mock enterprise applications that connect to Kyma, where we can test integrations. For example, for some of our internal SAP applications we have some mocks ready, to get developers started integrating with these applications without having a full deployment of one of them; they can start working on APIs and integrations with events already, with the mocking framework. Then we have a Visual Studio extension.
B: We have Octopus, which is our testing framework. It replaced Helm test for us, because that was not feasible, or not good enough, for us, so we built something called Octopus, and it's running our tests, basically. And we also have another project for marketplace integrations, like the GCP Marketplace, where you can one-click install Kyma. That is all built on Prow. I want to go over some architecture slides. Basically, I have one link here and I hope, yeah, it opens, and you should hopefully see that. So we have two GCP projects.
B: Currently one is hosting our Prow cluster, to run all the components like deck, horologium, plank, hook, the plugins that we have, sinker, branchprotector, and gcsweb, which we just added recently. And we have another cluster, in another GCP project, that is mainly responsible for running our jobs. We separated that out just today, actually. Before, we were running all our jobs in the same cluster, and we just separated that out today because we want to give developers more access to the actual logs, without giving them access to the Prow cluster itself.
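The component list above can be sanity-checked with a short script. This is a minimal sketch, under the assumption that the components run as Deployments with those names in the current namespace; the naming and namespace are assumptions, not details from the talk.

```shell
#!/usr/bin/env bash
set -euo pipefail

# "2/2" -> success, "1/2" -> failure: ready replicas must equal desired.
all_ready() {
  local ready="${1%%/*}" want="${1##*/}"
  [ "${ready}" -eq "${want}" ]
}

# Walk the component Deployments and report which are fully up.
# Needs kubectl and cluster access, so it is not called automatically here.
check_components() {
  local d status
  for d in deck horologium plank hook sinker branchprotector; do
    status="$(kubectl get deploy "${d}" \
      -o jsonpath='{.status.readyReplicas}/{.spec.replicas}')"
    all_ready "${status}" && echo "${d} ok" || echo "${d} NOT ready"
  done
}
```

Running `check_components` against the Prow cluster would print one line per component.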
B: All of this is running with our own configurations. I can talk a little bit about the configurations if you guys want. Other than that, apart from which plugins and which components we use, there's nothing really fancy about our installation. It's just connected to our GitHub projects and pointing at some of the configurations.
B: For everything that we do, we have separate documentation, for example for installing Prow. We have a couple of scripts, for example, to create the secrets, like the GCP service accounts that are required for installing Prow. We have a workflow for updating Prow that requires a developer to create a fork of Prow and install it into their own GCP cluster.
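The service-account scripts mentioned here might look roughly like the following. The project id and account names are hypothetical placeholders, not the real list from the Kyma docs.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumption: PROJECT is your GCP project id.
PROJECT="${PROJECT:-my-prow-project}"

# Compose the full service-account email that gcloud expects.
sa_email() {
  echo "$1@${PROJECT}.iam.gserviceaccount.com"
}

# Create one account and download a key for it; needs gcloud, so it only
# runs when the script is invoked with --apply.
create_sa() {
  gcloud iam service-accounts create "$1" --project "${PROJECT}"
  gcloud iam service-accounts keys create "$1-key.json" \
    --iam-account "$(sa_email "$1")"
}

if [ "${1:-}" = "--apply" ]; then
  for sa in prow-deck prow-plank; do   # hypothetical account names
    create_sa "${sa}"
  done
fi
```

The downloaded key files would then be turned into cluster secrets for the Prow installation.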
B: So we can test new functionality, or newer versions of Prow, when we try to update. For that we also have, for example, a how-to-update-Prow document, which is very interesting; we had to do some research, because we always base and cherry-pick our installations of Prow from the commits. For example, this one here: this is our current running commit, and we make sure that from all the changes that you guys push into the main Prow repo, we can pick what we need.
B: And, like, when someone complains it's not working, pull requests are not being built or something, a crowd complains and someone will fix it for you. We don't have that crowd all over the world yet, so we have to figure out how to do that ourselves, and we want to build some monitoring around that so we can do this.
B: Second, we have a lot of bash scripts to run our software, currently mainly spinning up clusters with the provided configuration, like DNS settings, network settings, namespaces that we create, and labeling for those. We set up everything with bash scripts right now; we want to move away from that and port it to Go. We have plans of integrating Tide for auto-merging of PRs. That's one of our bigger things as well.
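The cluster spin-up scripts described here could be sketched as below. The cluster name, zone, labels, and namespace names are illustrative assumptions, not the project's actual values.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Join key=value pairs into the comma-separated string that gcloud's
# --labels flag expects. Pure bash, no cloud access needed.
labels_csv() {
  local IFS=','
  echo "$*"
}

# Spin up a job cluster and pre-create labeled namespaces. Needs gcloud
# and kubectl, so it only runs when invoked with --apply.
provision() {
  gcloud container clusters create "${CLUSTER:-kyma-workload}" \
    --zone "${ZONE:-europe-west1-b}" \
    --labels "$(labels_csv owner=prow purpose=test-jobs)"
  for ns in prow-jobs test-infra; do
    kubectl create namespace "${ns}"
    kubectl label namespace "${ns}" managed-by=prow
  done
}

[ "${1:-}" = "--apply" ] && provision || true
```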
B: We have Tide running, I think, but we are not leveraging it for auto-merging right now. And another wish from our quality guys is that we display test results in a dashboard, where we want to use the Prow history, the Kubernetes TestGrid, for maybe adding our results to that view as well, so we can have an overview of it. With that, you can find us at these URLs; I can share the presentation later on. Yeah, and I'm open for any questions.
B: If I should talk about anything in detail, please feel free to ask. Oh, our Prow in the wild is the Kyma build status page, if you want to see that. We've currently merged, like I said, the separate workload cluster. That's creating some issues right now, because we haven't set up the proper VMs yet, and some jobs are failing right now, but the rest should be fine.
C: Sorry, I don't know if this was something you're aware of, but currently my team is adding whole monitoring to Prow. Currently there's a set of integrations with a couple of components pushing Prometheus metrics, and then sort of [inaudible] in the Kubernetes project that I know of. But what's being added to the repo today is sort of a standalone set of configs that will give you a monitoring stack, and then just kind of alerts and dashboards that we found. There's... oh yeah.
B: We have a Grafana already as well, but it's not really showing a lot right now. I think it's just showing the number of jobs that are running, currently or in parallel, so there's not a lot, and we just want to add some more metrics and query more health endpoints, to see that everything's fine. But it's definitely good knowing that you guys are working on that as well.
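Prow components expose Prometheus metrics over HTTP, so a dashboard like the Grafana mentioned here ultimately reads from a `/metrics` endpoint. A rough sketch of inspecting one by hand follows; the namespace, service name, and port are assumptions about the deployment, not values from the talk.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Count real metric samples in Prometheus text format, dropping the
# "# HELP" and "# TYPE" comment lines.
count_samples() {
  grep -cv '^#' || true
}

# Port-forward a component service and see how many samples it exposes.
# Needs kubectl and cluster access, so it is not called automatically here.
scrape_hook() {
  kubectl port-forward svc/hook 8888:8888 &   # hypothetical service/port
  local pf=$!
  sleep 2
  curl -s localhost:8888/metrics | count_samples
  kill "${pf}"
}
```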
B: The developers are actually requesting, like, SSH access to containers, because for some failed jobs they need to know what exactly failed, on the VM or in the cluster, and we don't want to give just anyone admin rights to the Prow cluster, where all the secrets live and all the config for Prow lives as well. So we just separated that out into a new GCP project.

C: So are you, are you saying, like, SSH access to the Kubernetes...
B: So I'm not sure what the exact issues are that the teams are facing, that they want to investigate. It's just that they were saying that the logs of the build are not enough for them to figure out what exactly went wrong. I think it's sometimes network issues between the components that they have.
B: That might be a problem where they need to see what exactly happened. And, like, we usually delete clusters that we create in a job right after we ran the job: if it fails, it's deleted; if it succeeds, it's deleted. But they were asking, for example: can we extend that deletion time to an hour after it ran, so that they have time to go into that cluster and check what actually happened?

C: I see.
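The delayed-cleanup idea discussed here, keeping a failed job's cluster around for an hour before tearing it down, could be sketched like this. The one-hour grace period and the gcloud call are illustrative assumptions.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pure helper: the epoch second after which deletion is allowed,
# given "now" and a grace period, both in seconds.
delete_after() {
  echo $(( $1 + $2 ))
}

# Wait out the grace period, then delete the job cluster. Needs gcloud
# plus CLUSTER and ZONE set, so it is not called automatically here.
delayed_cleanup() {
  local deadline
  deadline="$(delete_after "$(date +%s)" 3600)"   # one hour of grace
  while [ "$(date +%s)" -lt "${deadline}" ]; do
    sleep 60
  done
  gcloud container clusters delete "${CLUSTER:?}" --zone "${ZONE:?}" --quiet
}
```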
B: So the jobs usually spawn a cluster. For example, there are integration jobs that would run (I cannot find them right now) that create a cluster, install Kyma into it, test the whole thing with our integration tests, and then basically shut it down after. So they would go into that one, not into any cluster or any node that would be spawned by Prow itself.

C: Okay. So then, did you... so are you scheduling...
A: I feel like either way, the scripts and the docs are things we may want to take a look at. I feel like some of the diagrams and docs he showed look amazing to somebody who's never had to stand up Prow before, because they were written by somebody who hasn't been working with the Prow codebase for years and is just learning it, and I feel like we're kind of missing that sort of perspective in our people. So maybe...
B: Yeah, well, we have our... so, for example, the Prow installation on forks is our document for "hey, I want to upgrade Prow or something, and I need to install Prow on a separate fork to test all of this, plus the job configuration that we usually run." The developer will just follow this script to set up their own Prow cluster, and then they basically have an exact copy of our cluster. But the problem that I ran into when I started on that project, it's like: "hey, can you just create all these service accounts first?"
B: And it's a lot. I also found out that you don't need all of them; the install script asks for a little bit more, and it fails a couple of times. So we created a script to actually create these service accounts for you, or at least the minimum set required to run it, and basically everything should be described in here. And we can definitely have a look and see what we can provide for you guys, to put that back into the Prow Kubernetes repo.
B: I think we're doing pretty much the same thing, like, very similar. I saw your bump.sh script that you use, but for us, like, when I ran it, it gave me, I think, the last 10 builds, and the last 10 builds are maybe all from today, and we usually cherry-pick something from your document here. So no, there's a different document, I don't know where to find it right now, that's basically your release notes, and there we can see: hey, on March 26 that feature arrived, and the update ran for five days.
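The base-and-cherry-pick update flow described earlier can be sketched as a small git routine. The commit SHA, branch naming, and the `upstream` remote pointing at kubernetes/test-infra are placeholders, not the project's real values.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Derive a working-branch name from the pinned upstream commit SHA.
branch_name() {
  echo "prow-update-$(echo "$1" | cut -c1-7)"
}

# Re-base the deployment on a pinned upstream commit, then cherry-pick
# the extra fixes on top. Needs a git checkout with an "upstream" remote,
# so nothing runs unless you call update yourself.
update() {
  local pin="$1"; shift
  git fetch upstream
  git checkout -B "$(branch_name "${pin}")" "${pin}"
  git cherry-pick "$@"   # the fix commits you want on top of the pin
}
```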
A: So I think the thing that Steve is referring to is where, like, we have a PR that gets opened up against our repo, and it's periodically refreshed as new things are built. Included in that PR is a diff that shows you, between the versions it's asking you to bump from and to, what the actual differences are in the source code, in our test-infra repository. Is that something you've taken a look at, or is this not yet something you've seen?
B: That depends on what you want to test. So if we just want to see that the Prow components start up for an update, we do that first, and sometimes we do actually put the whole job config there, but on the developer's own Kyma fork, basically. So every developer that is doing that would have their own GitHub bot account, or use their own account for that.
D: And then, something to keep in mind there, with using your own account for testing purposes: the bot account that Prow uses is expected to be different from what users use. So if you use your own account as the Prow bot account, sometimes the bot will actually ignore messages from you, because it thinks that they are messages from the bot. So you can get some weird behavior if you don't actually create a separate bot. Just FYI. Yeah.
C: Okay, all right, we didn't get to your attendance, Steve. We should do just a quick blurb for the GMT meeting, just in case anyone was interested. Yeah, so, as we talked about last week, we're doing a meeting that's at 2:30 GMT, 7:30 Pacific, just to help everyone in a very different time zone get to the meeting. The first meeting is going to be next Friday at 2:30 GMT, and the link to the document for agenda notes, the meeting agendas, is in this agenda. So if you want, show up, throw some...