From YouTube: Kubernetes SIG Apps 20180813
B: We can see it? Alright. So, thank you guys for having me here today. What I wanted to talk about is this project we've been working on called Brigade, and I want to show a demo of some of our new work that we've been pushing through over the summer. But I thought I'd give a little bit of an overview first, because I know Brigade is new for many of you. We describe Brigade as a pipeline builder for Kubernetes. Sounds like a catchphrase — easy, right? But basically, what we were after is this.
B: We all use shell scripts. We use them a lot, and we get a lot of value out of them. Why? Because shell scripts provide us a way to wrap some flow control around the programs that are already installed on our system. I've got a wealth of readily available tools on the command line, but a lot of times I just want to say: do this, and if this happens, then do that; otherwise do this other thing over here.
B: That's a very useful thing to be able to do. So we said: all right, given this general utility of being able to wrap flow control around programs, what if we swapped out the idea of programs for the idea of containers, and put this whole thing in Kubernetes? That's where the idea for Brigade was born. Brigade wraps flow control around containers, so we can say: execute this container, then take its output and pipe it to that container.
B: And if that errors out, do this; otherwise send it over there. So basically we're trying to mimic the model of shell scripting, but inside of Kubernetes. Here on one slide I tried to give all the nomenclature at once. We chose JavaScript as the language because JavaScript is the most used language — we looked at multiple surveys, and, for better or worse, that's where we are as an industry.
B: So that's our base scripting language. A brigade.js file contains the code that listens for events — this is kind of like the AppleScript model. It listens for events and then executes, say, a build, where a build is composed of zero or more discrete steps which we call jobs. So we're wrapping flow control around what we call jobs, and a job is just a wrapper for executing a pod. To make this much easier to grasp...
B: Let's just look at a very simple demo. Here's an example Brigade script. We import the brigadier library into our JavaScript — that's the main thing that contains all our good stuff — and then we register an event handler. We say events.on("my_event"), so when my_event triggers, run a build with this particular function. This function says: all right, create one new job.
B: This job is just going to use the alpine image, and when it runs it's going to execute `echo hello` inside that alpine pod, and then run it. So when a new my_event gets triggered, it's going to run this function, which creates a new pod, executes `echo hello` inside of it, and exits. That's the core model right there: we have a way of executing these jobs inside of a scripting model.
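The "hello" script described above can be sketched in brigade.js form. This is a hedged reconstruction from the talk — the event name `my_event` and the exact image tag are illustrative, and the script only runs inside a Brigade cluster that supplies the brigadier library:

```javascript
// brigade.js — minimal event-handler sketch, as described in the talk.
// Requires the brigadier library provided by a Brigade cluster;
// the event name "my_event" is illustrative.
const { events, Job } = require("brigadier");

events.on("my_event", () => {
  // One job, backed by the alpine image; Brigade runs it as a pod
  const job = new Job("hello", "alpine:3.8");
  job.tasks = ["echo hello"];
  return job.run();
});
```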
B: That whole thing executes inside of your Kubernetes cluster, and we can pipe data back and forth between those jobs. But one of the things we wanted to make sure we could do is leverage the fact that we're running in a cluster, and a cluster is very conducive to running large, concurrent jobs. So we wanted to make sure it was easy out of the box to run jobs in parallel or to run them sequentially.
B: Then something aggregates that data and does something else — that would be your typical fan-out, fan-in kind of pattern — and Brigade is designed to make this kind of stuff very easy to do. In fact, by default things run in parallel, but here's a little example of running a couple of things sequentially. I've got two jobs now: job one and job two. One is going to echo "hello" and two is going to echo "goodbye".
B: Clearly, I would prefer to say hello before saying goodbye, so I'm going to run these in sequence. I'm going to import this group library — oops, there's a typo there; it should be "group", not "groups" — and run each of them in order. That last line, group.runEach([j1, j2]), says: run the first pod until it exits successfully, then run the second pod.
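The sequential example just described can be sketched like this. Again a hedged reconstruction — the job names, image, and event name are illustrative, and it needs a Brigade cluster to actually run:

```javascript
// brigade.js — sequential jobs via the Group helper (sketch).
const { events, Job, Group } = require("brigadier");

events.on("my_event", () => {
  const j1 = new Job("one", "alpine:3.8");
  j1.tasks = ["echo hello"];

  const j2 = new Job("two", "alpine:3.8");
  j2.tasks = ["echo goodbye"];

  // runEach waits for j1's pod to exit successfully before starting j2;
  // Group.runAll([j1, j2]) would run them in parallel instead.
  return Group.runEach([j1, j2]);
});
```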
B: So we can see that this idea of being able to run in sequence or run in parallel is just a built-in fact of Brigade. That's the big notion of how we script Brigade in general. But how do we trigger those events? I said this is an event-based system, so the idea is: something happens, Brigade gets notified — hey, this event triggered — and then it spins up your script.
B: We use gateways to do this. A gateway is basically something that takes an external stimulus of some sort and converts it into a Brigade event. So we can talk about a cron job gateway — which we have — that basically says: hey, every time it's noon, run this. But we also have more sophisticated ones. Currently we've got GitHub, Bitbucket, and GitLab gateways — three different gateways that convert, say, a pull request on GitLab into a Brigade event, or something like that.
B: Then we've got some to handle this sort of current generation of cloud events — CloudEvents itself is actually a spec — Event Grid, things like that, where your public cloud or your private cloud will emit an event and this gateway captures it and converts it. There are some for Kubernetes, so we can convert, say, a pod failure event into a trigger for some particular Brigade script, plus generic webhooks and so on.
B: The basic idea was: it should be easy to create gateways, which merely have to listen for one kind of stimulus and convert it into a Brigade event, and then Brigade itself is fairly generic in how it handles all of those. All of that was setup, because what I wanted to show today was this gateway we've been working on — sort of our next generation of the GitHub gateway.
B: For those of you who follow recent GitHub developments: not too long ago they introduced a new set of webhooks called the check suite webhooks. These were designed to make it vastly simpler to run checks on incoming PRs and pushes and things like that, and then display the results of those checks inline inside of GitHub. So let's take a look at that. I've got this old piece of code called solver.
B: It was kind of a Saturday-afternoon project from when I was bored. I was tinkering around with it a little bit and found all kinds of tiny formatting errors and things like that I had made a long time ago. So I'm working on this code here, and you can see I've already staged up a couple of different changes and I'm already on a branch — my example branch. So I'm going to go ahead and commit this and push it up as a PR on GitHub.
B: So it's going up there and pushing — and here we go, I see my new push come up. I'm going to create a pull request off of that, and there we go. At this point I've pushed up that pull request, and it's going to kick off an event that'll pipe from GitHub through to my GitHub gateway app for Brigade, and we can see that some checks have already kicked off — four different checks kicked off immediately.
B: Here's where the new GitHub feature comes in: I've got this new Checks tab up here, and when I click through, this is my detailed view of the checks that are running. The DCO one passed already — let's go take a look at that. The DCO check succeeded because it could tell that I had signed off as Matt Butcher.
B: We saw that when I chose that off of my commit menu. It looks like while we were looking at this, the test check passed too, so I can go in and see the output of it running my unit tests. The coverage one is done. Coverage isn't a pass/fail check — it's just an informational one — so we get a gray square instead of a green checkmark, and it says my unit test coverage is pretty bad: I've only got 28.9% test coverage on this library.
B: So this is not a very good commit here, and if I go in and look at its checks, I can see why. Here's the DCO tab: I didn't sign off on my commit message, so it shows me what my commit message is, says it's ineligible for merging, and tells me where to go to learn more about what a DCO is and why I need to sign off on it. I click over on the tests and I can see that the compilation failed down here because of an error, and the coverage check is going to fail for the same reason, because it couldn't compile. But my style checking did not fail; instead it gave me a whole bunch of little stylistic things I might want to take a look at — the cyclomatic complexity of some of my libraries is a little bit on the high side, but nothing there caused a fatal error.
B: So what we've seen here is a couple of different outcomes from running this same check suite. Now let's take a look behind the scenes and see what's actually going on. The Brigade gateway we executed here — really all it's doing is intercepting those check suite events and forwarding them on to Brigade. Brigade is not a CI system; Brigade is a pipeline environment. All that CI logic is actually happening right here in this one brigade.js file. So if I go look at this brigade.js file, I immediately start to see...
B: Oh, okay — I've seen this line before. I load in the core libraries, and I'm declaring some event handlers. These are the events that I know get generated by the GitHub check suite API, and I'm just sending them all to this same runSuite method. If I look at runSuite — oops, wrong button — I can see my four tests: DCO, unit test, style, and coverage. They're all going to run in parallel, so I'm just going to run all four of those tests and report the results.
B: Let's take a quick look at a couple of the tests — we'll do the DCO test and then the unit tests; we don't have to go through all of them, because the pattern is pretty repetitive after the first two. Here's the DCO check. The event that I get from GitHub contains all kinds of useful information, including the commit message, so right here in JavaScript I already have access to the commit message, and all I need to do is make sure that it was signed off correctly.
B: Now, this is an oversimplified DCO check, but it gets the basic idea across. I grab the commit message out of the data that I get back from GitHub, and I run a regular expression on it to make sure that it's got a signed-off marker. If it does, then I'm down here in this notification section; if it doesn't have the sign-off, I want to emit the failure and point them to helpful documentation and so on.
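The sign-off test described above boils down to one regular expression over the commit message. Here is a minimal, self-contained sketch in plain JavaScript; the exact regex Brigade's gateway uses may differ:

```javascript
// Check a commit message for a DCO sign-off line, as described above.
// The regex is an illustrative approximation of the check.
const signedOff = /Signed-off-by:\s+.+<.+@.+>/;

function checkDCO(commitMessage) {
  return signedOff.test(commitMessage);
}

// Example:
// checkDCO("Fix solver\n\nSigned-off-by: Matt Butcher <matt@example.com>") → true
// checkDCO("Fix solver") → false
```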
B: If it passes, then I just send a nice, convenient passing message, and here's where I run that job. This is a good example of how this is inspired by shell scripting: sometimes when you're shell scripting, what you want to do is fairly trivial and you just do it inline in the shell script; other times, you want to pipe it out and have something else do all the work for you.
B: That's what runUnitTest does. In the runUnitTest command we want to do two things: we want to send the notifications up to GitHub — and that's containerized — but we also want to build our source code and run the tests on it. We're going to want to containerize that too, because I want the Go toolchain, I want dep, and I want to be able to do all kinds of stuff. So here's the basic pattern we're doing here. First, we create a notification object.
B: That's the thing that's going to notify GitHub that we're running a test. Then we create our little job object here, and this is like the examples we saw before, except I'm creating what's called here a GoJob — we'll take a look at that in a second — and I'm telling that job: okay, just run `make test`.
B: Then I'm calling this function called notificationWrap. Here's what it's going to do: notificationWrap gets its basic notification and its basic job, and the first thing it does is send GitHub a notification saying "I'm running the test now," and this little indicator turns yellow. This is a sequential process — I've got four jobs running in parallel, but this particular job we're looking at runs multiple steps in sequence.
B: It sends that notification and turns the GitHub dot yellow, then it executes `make test` inside the target environment, waits around while the tests run, and then checks whether they were successful. If they were, it comes back and notifies GitHub and changes it to a green check. Otherwise we get output like this, where it says it failed, and here's all the stuff that just went wrong.
B: So we can see the high-level pattern that is emerging in this particular script: we run a bunch of things in parallel, and inside each of these functions we might call several things in sequence. The style test actually runs a whole bunch of stuff. But that notificationWrap function isn't something that's built into Brigade.
B: The idea is that we give you the primitives to very easily extend things. This notificationWrap is actually just one little function that does some asynchronous processing and says: okay, create a notification, then run the job, then capture the logs and notify GitHub that we're done. Likewise, that GoJob down here is actually just a class that extends Job.
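The notify-run-report wrapper and the Job subclass described above can be sketched as follows. This is a hedged reconstruction from the talk, not Brigade's documented API: the shape of the `note` object (its `conclusion` and `summary` fields and `run()` method) and the image and tasks in GoJob are assumptions for illustration, and `Job` comes from the brigadier library inside a Brigade cluster:

```javascript
// Sketch of the notify → run → report wrapper described above.
// `note` stands in for whatever object posts check-run status back to
// GitHub; its fields here are assumed from the talk, not from docs.
async function notificationWrap(job, note) {
  await note.run();                  // tell GitHub the check is in progress (yellow dot)
  try {
    const result = await job.run();  // run the containerized step, e.g. `make test`
    note.conclusion = "success";     // green check
    note.summary = result.toString();
  } catch (err) {
    note.conclusion = "failure";     // red X, with the captured failure output
    note.summary = String(err);
  }
  return note.run();                 // report the final status back to GitHub
}

// GoJob: just a Job subclass preloaded with a Go toolchain image, so
// callers only add the task to run (e.g. "make test"). Image and setup
// tasks here are illustrative.
class GoJob extends Job {
  constructor(name) {
    super(name, "golang:1.10");
    this.tasks = ["cd /src", "make bootstrap"];
  }
}
```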
B: What we tried to do with this JavaScript idea in Brigade was make it so that scripts could be as simple as the stuff we looked at to get going, and oftentimes my scripts actually do look like that — maybe two or three jobs that get triggered by a cron event, or four or five small tasks that I can just chain together with async/await kinds of stuff. But we can also build increasingly sophisticated stuff. This one we're looking at is probably mid-tier in the sophistication category.
B: But the important part is that we were attempting very vigorously to create an environment just like shell scripting, where you can do some fairly easy things in a fairly easy way, but then build up from there to meet your level of sophistication. I wanted to show off that new gateway because I think GitHub has made a major step forward with this check suite functionality, making it very easy to report and collect data and stay within the GitHub interface without having to log in to another system.
B: Well, that's what I wanted to show off this morning — give you a little peek at what we're working on with the Brigade gateway for GitHub, but also show off some of the features that are already there for Brigade. I'm happy to answer questions, but that's all I've got.
B: The generic GitHub gateway came out as part of Brigade 0.1, so it's been there since the beginning. Partway through, when this check suite API came out, we decided we wanted to kind of pivot over and use the check suite functionality. So we have been hacking on it, and that's the result of our hacking. We're going to open source that as a separate project and then deprecate the older GitHub gateway — I think that's kind of our current plan.
A: So, for any of those of us who have worked with client-go: we know that its API is always changing, and they do increment the major version — in the last year it's incremented four times, because it's always changing. And this is a simple Go client for Kubernetes. It's tagged as a slimmed-down Go client generated using the Kubernetes protocol buffer support.
A: So for anybody who's interested in this: kubernetes client-go can be difficult to work with, especially for a lot of simple circumstances, and this may be an alternative. I would suggest anybody who's not familiar go take a look at it and see if it works for them, because a slimmed-down client is something many of us have been talking about for a while, including folks who work on API Machinery — client-go is just complicated, and it shifts with the internals of Kubernetes, and that can be hard.
A: The second thing that was on the agenda: what are the use cases and scenarios that folks are looking for, for functions-as-a-service on Kubernetes? This is partly inspired because Knative is now out; we have three or four different functions-as-a-service offerings on Kubernetes, and there are things like Brigade, which we saw today — another event-driven, very function-based thing.
A: So, aside from something like what Matt showed us today with Brigade, what are the other use cases and scenarios where folks would want to use functions-as-a-service in Kubernetes? I'm trying to draw out some of the use cases and the reasons behind them, because just having something is nice, but when you know what you're doing, you can know who's going to do it, and then you can start to drive experiences around it that hit the need.
C: A case that I've seen that's successful and common — it seems to be a good fit for functions-as-a-service in general — is when you have a particular piece of your application that it's advantageous to scale in its own way, perhaps independently from the rest. For example, you have humans submitting videos that need to get transcoded, or documents that need to be sent through OCR, or something like that, where you want to be able to isolate that; maybe it comes in bursts.
C: You want to be able to scale your capacity up and down just for that portion of your application, because it's so compute-intensive, and maybe the type of resources you grab when you scale it are different from the rest of your application. Maybe you need GPUs for that kind of job but not for the rest of the application. In that case it's really advantageous to isolate just that job into functions-as-a-service, or something similar, and be able to scale it — and of course, Kubernetes.
A: So if I were going to do something like media transcoding, I need to have the media transcoder. So I'm either calling out to a software-as-a-service — which I could easily do in a function — and then waiting for the media to come back, or I'm going to have to have a custom image somewhere that has the media transcoding material on it, something like ffmpeg. And so now I've got a custom image, so it's not just a function that does that work.
A: It's other extra supporting material. So say that today I'm doing this on-prem, I'm doing it in Kubernetes, and I'm not calling out to a SaaS — wouldn't I need something like, well, in this case, Brigade, that wrapped it with logic and then called out to a job, to an image that had that material, in order to do it? Or is there another way I could just nicely wrap that in a function? I guess I'm thinking a step ahead: what would that be like?
E: I think one thing the paradigm allows that people really like, from a cost perspective, is pay for what you use, or bill for what you use. I think that's the real enabler, because ultimately capacity right-sizing is a problem that we try to solve within core Kubernetes as well, with horizontal and vertical pod autoscaling and the various cluster autoscalers for the underlying infrastructure — in order to turn up new images you need additional capacity, and you release it when it's no longer used. So I don't think...
E: ...FaaS is really new to the pattern of right-sizing, from that perspective. But the idea that we're going to meter individual functions and try to bill back based on those, and the ability to do that inside of Kubernetes using something like Knative or OpenFaaS — if you have a multi-tenant organization that is using the same compute, that would be a real value-add. You can do that with containers, but it's kind of different. And then maybe developer agility, right? Like, if you're using a deployment to turn up applications...
E: ...for instance, you can incorporate a horizontal pod autoscaler in order to scale that deployment up and down, based upon either CPU duty cycle or memory utilization — either the length or the width of the shape of the pod. But a FaaS is an easier unit of tenancy for developers to think about: I write this function, I deploy this function to the cloud, the resources get handled for me, it auto-scales up and it auto-scales down.
F: Yeah, I was going to plus-one what Kenneth said as well. At Red Hat there are a lot of customers, and what they're looking to do is really to unify their entire compute platform. What they really love about functions-as-a-service is the ultimate flexibility in end-user consumption.
F: There's some aspect of ease of use to it, but really it's the ultimate flexibility, because when we talk to a lot of people running these data centers, they get more and more pressure to reduce cost and have flexibility, and they have lots of things that are just consuming a ton of resources even though they don't really need them. We're all familiar with this problem.
F: It's that kind of ultimate flexibility — scaling when it needs it and then going away when it's not needed — and also using the same platform as they would for even GPU workloads, as well as some other pieces, so there's consistency. You don't want to have admins who have to know and learn multiple systems on the backend as well.
A: So in some ways, what we're talking about here — I think this is a good way to put it — is the scale-to-zero problem. How do you scale stuff down to zero when it's just not in use, so you don't end up paying for it, even if you're in an on-premise data center? I think that's in many ways more serverless than functions-as-a-service, but it gets to the same place.
E: It's a different cost model for on-prem deployment than for cloud. For FaaS and autoscaling, if you have cluster right-sizing, you can rescale your clusters down to zero and, depending on the billing model for the actual control plane, you're definitely reducing cost. From the on-prem perspective, you already paid for the SKUs; so if you've paid for the SKU, the best you can do is utilize them as much as possible, and over time, when you make your next purchase, potentially make a smaller purchase for the number of SKUs.
A: But it ends up getting you a better utilization model, which is where the cost comes in on-prem — it's really about utilization there. And even in public cloud — it may be a slightly different billing model whether you're on-prem or not — everybody ends up having to bill back, and end users want to pay the least amount they have to in order to get the most out of it, and scale-to-zero kind of helps with that. But part of this I'm thinking of from the users' side.
A: Because we're talking about developers today: when a developer wants to do stuff, where is their interest? Just having all of these bits and parts together that they have to go deal with is separate from the kinds of use cases and the ways they'd want to do it, which is why I dug in a little bit on something like media transcoding or OCR — because if you actually get to it, you're not going to do the media transcoding inside of your function; you need other blocks there to help you do that.
A: I've never seen a JavaScript function that can do the media transcoding itself — you're always calling out to some external thing. Now, if I were on AWS, I'd be piecing these parts together, and once those parts are pushed together, this is just another step that says: call off to, say, AWS's transcoding service, use that, and then put the result where I want it — it's just the intermediate logic.
A: So how would I go about it? Because that end-developer experience, I think, is a huge part when we talk about developer experience — having pieces that you can plug together. It's like when I'm using Legos: I've got a handful of parts and I put them together, but I wish there was a piece that did this or that other thing, so I could solve it.
A: You know, something that's a wheel, or something that's a hinge — if I don't have it, that creates problems if I'm trying to build something that needs it. So I'm trying to identify what some of those Legos are, based on the actual experience of building stuff. That's kind of where I was going with this question.
C: One approach is to think of that function like the main function of a mini application and treat it basically the same: import stuff, use third-party libraries. That gets into the logistics of how you make sure that stuff is available — I guess back to the image building you talked about before — but I think that's a fairly common pattern.
A: So, we've drilled into that. Are there any other scenarios, besides right-sizing workloads and use cases where you need to scale differently from the rest of your application — like media transcoding, which I've used a lot? Do folks have other situations where they see using FaaS, or know of customers or folks who'd want to use it?
E: I think one thing that I've seen a lot of is: if the platforms are already there, and all you need to do is glue together a bunch of the existing platforms to add some new business logic that actually provides a service or a new feature, functions ease the development effort — at least from what I've heard other developers say: "this is why I like Lambda, this is why I like OpenFaaS."
A: And I'll tell you — I'm doing some work on the upstream charts repo for Helm, and don't be surprised if I'm using Brigade and some of this stuff to make some of that work. We've got to do some maneuvering over there with labels and some other checks and things like that; we're looking to improve on it, and I might take a look at using Brigade — in particular some of the jobs and stuff like Matt demoed today — to do that.
E: There was an email to the SIG Apps mailing list earlier today — I think from Tomas — requesting that we have more workloads API discussions, and we really haven't had too many in a while. But that's also because the workloads API has been a little quieter since it went GA. Is that something people are amenable to? One thing that's probably clear is that maybe we're not discussing the API enough, if there's more of a demand and we're not meeting it. Does anyone...?
E: Job is the only one that's not GA right now. There are some other discussions we might want to have about Job, although a lot of that happens on GitHub, as opposed to spending dedicated time inside of the SIG. One thing I was thinking is that we have 20 minutes right now, for instance, and it's not like we're usually so pressed for time that we don't have time to talk about it — we usually do have at least 10 minutes for open discussion...
E: ...at the end of every meeting. It's very rare that we're running so tight that we can't fit it in. I don't think it's a bad idea to dedicate time bi-weekly, as was proposed, but it's also not clear to me that we don't already have enough time to bring up specific issues as necessary and just put them on the agenda. So I'm just looking for what our members want to do.
G: Yeah, I'm going to reply to that email as well, but since I'm already here: I think part of the reasoning is that, honestly, I don't care about the rest of the stuff. We do have the open discussion at the end, but a lot of the other stuff doesn't really relate to my work — I'm sure it's interesting for other people — and I kind of see that there are a lot of groups here with different agendas or different types of work; maybe someone works on Helm.
E: That makes a lot of sense. So what I'm hearing is that, by dedicating specific time to talk about the workloads API, it allows people to get in on the conversation they want to have, and not have to attend and listen to a bunch of other conversations that might not be of particular interest to their current work and focus. That makes sense to me.
G: Yeah, one thing that came to my mind is that I don't think people feel comfortable putting the small issues we should talk about on the agenda, and there are other folks who don't care about those as well — I think it goes the other way too.
E: Right, it goes both ways: other people who want to drop off, or who aren't interested in minimal features, aren't forced to listen to us go into detail about the semantics of job termination.
A: For what it's worth, there have been issues with GitHub's APIs recently not returning all the information for some queries, so that has been causing some issues over the last week or so. I'm not sure about before that, but there have been issues recently — it was affecting our merge automation in the Kubernetes world, which uses that stuff.
E: One thing we did already — it was a while ago, and it kind of got quiet — Tomas did present this to SIG Apps several months ago, I think it was. We did talk about lifecycle hooks, and talked about maybe adding them in Deployment first and then, if they proved useful, migrating them into the other workloads APIs. We had a conversation about various use cases for them, and how they were used and implemented on DeploymentConfig in OpenShift. So it wasn't like this...
E: ...hasn't already been discussed. I guess some things have kind of been lost, and now, if we're ready to move forward with it, what are the next steps? One of the interesting things is that because it's v1, and it's going to add things to the v1 API, I think we're also going to have to take it to SIG Architecture. I mean, we do have to get general consensus within SIG Apps that it's something we want to do, but we may, under the new rules, have to go through SIG Architecture as well.
E: So in the interim, I guess what they're asking for is to get approval for the fields they're going to add to the v1 objects, and make sure that as we promote them through the API, we stay consistent and don't cause backward-compatibility problems between first-released versions of Kubernetes. People have gotten really sensitive to that.
E: What people are doing now for alpha annotations — or alpha fields — is stripping the field out of the storage format prior to persisting it inside the API server. So basically, unless alpha is enabled, you can't store any of those fields, even if you use the API. Brian's main concern is that when you add these fields to the Go types, they do appear in the swagger spec and the discovery APIs. So from a discovery standpoint...
E: ...people can actually take a look at it, even though it can't be persisted into etcd, and that could cause weird client interactions. This is just something we were discussing with respect to the various pod proposals for different isolators, like gVisor and CRI-O and so forth — the sandboxing folks got the same feedback.
E: No one is happy about having anything that users interact with stored as an annotation — anything other than a simple string or numeric value — because storing JSON blobs as annotations is a really awful UX. So if no one wants to do that, the question is: for the sandboxing one, for instance, they're trying to add a field to Pod that the kubelet would consume in order to implement the sandboxing.
E: But the majority of the API they've been advised to implement as an extension that would be sponsored by SIG Node — kind of the same way as the Apps resources — something that may ship with Kubernetes by default but isn't being developed in core. So if we're not going to take that path, I think we need to take a look at making sure that's not an option, and then have a good story about why we want to do this in core.
E: I mean, it's very tightly coupled to the controllers, so from my perspective I wouldn't see this as a good option to do as an add-on project — but we might get feedback from SIG Architecture that we should. So at this point, earlier rather than later, we should start considering taking the API to SIG Architecture and seeing what their feedback is.