From YouTube: SIG Interoperability Meeting - July 7, 2022
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
C
Yeah, is Fati or Justin or any of those folks joining, Melissa?
H
My wife is a software developer in the JavaScript world, and she's going to a conference that's held at this outdoors resort in Oregon. It's very chill; there are activities planned, like canoeing and stuff like that, and then they have conference sessions during the day. I'm on, like, the significant-other track: I'm going to show up and do fun resort things, and they offer child care. And so it's amazing.
H
I'm curious: are any of you on the call writers? Do you do technical-type writing?
H
All right, okay, so we're three minutes after, so let's maybe go ahead and get started.
H
Okay, great, sweet. So, welcome, everyone, to SIG Interoperability. Just to refresh your memory on what it is we do here: it's basically about letting projects and people work together better, and that includes, ideally, not having to write a bunch of glue code when you stitch various CI/CD components together.
H
That's the mission, at least, and so, if you ever have to write glue code, that's a perfect opportunity to bring it up for discussion and see how we can help fix that kind of thing. On this topic, we're going to be chatting with Uffizi, who are kind of in this space and are going to talk to us about ephemeral environment creation on a per-branch basis.
H
So when you, like, submit a pull request, you get a magic environment that you can go poke at. And remind me again who is here among us from Uffizi?
C
Yeah, so I'm Josh. I'm a co-founder, and I primarily consider myself the chief evangelist for all things Uffizi, amongst wearing many hats. And then Grayson, Adam, you guys want to introduce yourselves?
E
Yeah, I'm Grayson, also a co-founder. I kind of lead our project roadmap day to day and sort of act as a scrum master, among other things. I'm probably chief documentation writer as well, although that's kind of everybody's job on our team, I think, but I do a lot of that as well.
H
All here. If you want to take it away and tell us about Uffizi, what it does, and kind of what sort of stuff you're up to, that'd be super helpful.
C
Yeah, sounds good. Thanks so much for having us, Justin. You mentioned glue code and interoperability, and there are kind of two main problems that we try to solve. One is an interoperability problem between what developers use locally and the infrastructure world: specifically, a lot of teams use Docker Compose for spinning up their application locally and then, of course, use Kubernetes for infrastructure.
C
So Uffizi is a bridge between the two: you can use Docker Compose to create on-demand environments on Kubernetes. The other interoperability element we work on is that it's designed to work with any CI/CD platform; specifically, it's triggered by CD events. I'm speaking generically because I've learned about the CDEvents project, and that's important to us too, because every CI/CD system has these events. They're maybe called something different, but they're effectively the same thing, and it's helpful for us to be able to trigger on-demand environments based on commonly defined events. So, anyway.
C
I'll kind of run through some slides here to introduce the challenges that we see in the industry that we're trying to solve, and then get to the more important, exciting part, which is a demo of spinning up an environment.
C
I'll do the demo, then a call to action: hey, how can you get involved in our project? We'll do some Q&A, and then I'll dump a bunch of references and key links that people can follow up with. So, the problem as we see it: across the software-producing industry, we're using cloud-based test environments that are shared, persistent, and over-resourced.
C
When you combine those three together, it's inherently inefficient, and so Uffizi is really trying to solve this inefficiency problem that many or most of us have as we're taking code from something that's written to something that's running.
C
The biggest problem we see is the sharing problem. When we're using a shared test environment, the shorthand we use is that it's "dirty": you have multiple committers consistently pushing new commits into that environment, and when bugs are introduced, it's difficult to find out where they came from. It's a time-consuming process, typically for your most senior developer, to manually trace and figure that out, and they end up acting as a bottleneck for organizations, where oftentimes teams are in a holding pattern because, again, the environment is contested.
C
While problems are being resolved, other people can't push to it. Okay, so the persistence problem: this is probably the easiest one to grasp. When you have a persistent environment, it obviously runs 24/7, 365 days a year. That's a ton of resources being utilized when, in reality, how often is someone actually testing something in that environment? The simple math is less than 20% of the time; it's probably more like 10%, but I'm being conservative there.
C
If you take out nights, weekends, and holidays, you're down to 20%, so if there's a way we can not run these environments all the time, it's going to be a huge resource savings. We also have, generically, an over-resourcing problem. In the post-Kubernetes world, we tend to think of an environment as a cluster, and Kubernetes is designed for scaling and availability. It does that really well in production, but test environments don't need to scale or be highly available.
C
So in many cases, the Kubernetes cluster, just by existing, is consuming way more resources than the application actually requires, and we've learned this firsthand. If you go spin up a GKE cluster and just hit the default settings, you're going to get three e2-medium nodes.
C
Combined, that's a compute of about 12 gigabytes of RAM and about six vCPUs. There's going to be some disk, and there's going to be a cluster management fee; at the base level, you're looking at about 160 bucks a month for that. And then I'm going to deploy maybe a four-, five-, or six-container application, which in total might require a gigabyte, or maybe two gigabytes, of memory to actually run.
C
So in this case, I've got an infrastructure layer that's consuming way more resources than my application actually needs. To talk through that scenario a little bit more: let's say you're a large organization, say Acme Incorporated, and your development team requires 10 test environments in any given month, maybe QA, staging, and demo environments. Okay, they need 10 environments, so, option one:
C
I could create 10 persistent Kubernetes clusters and deploy one application per cluster. That looks like the diagram on the left, at a total cost of about $1,600 a month baseline, with a huge amount of memory consumed by the nodes alone. Or, option two, I could use one cluster and have ten lightweight environments within that cluster.
C
That way, I make much more efficient use of my resources, and it would cost me about 160 bucks a month: a ten-fold difference in both the price and the resources consumed.
C
Okay, so those are our problems: I've got persistence, I've got sharing, and I've got over-resourcing, and Uffizi endeavors to solve those. Our project envisions a new testing paradigm that uses on-demand environments to overcome these inefficiencies. The characteristics we're aiming for in our environments: they're ephemeral, spun up in response to demand or a specific event, and they have a purpose-driven life cycle, so they only exist as long as they're needed for testing. They're clean; they're not shared.
C
I can have an environment per developer, per branch, per release, however your team wants to work. They're lightweight, so they use really only the amount of resources that they need. And then there's a whole other set of problems that I didn't even really bring up:
C
I'm not going to talk too much about that today, but being developer-friendly is super important as well, and that's part of our reasoning behind using Docker Compose to define these: they can be configured by really anyone on your team, not just folks who are smart on DevOps and infrastructure.
C
Uffizi environments are, by definition, pre-production. They're not for use in production, and not for load testing per se, but there are a lot of test cases they can be used for: PR or merge request environments and preview environments (some of these terms are a little synonymous, but people use different terms, so I'm going to try to be comprehensive here). You can use them for QA, staging, release, and demo environments as well.
C
Let's talk a little bit about capabilities and limitations. In the blue is what it can do, the scenarios where it's really useful. If I have a microservice application that can be defined in Docker Compose, that's a good use case for leveraging Uffizi environments. It works well for any services that can be mocked or represented as a container: maybe in production I'm running some containers, but I'm also using a managed database or maybe some other managed services.
C
If those services can be mocked up as containers, then Uffizi is a good solution. It works well if test data can easily be seeded into a new environment, and also for anything that interacts with a stateless managed service. Where it would be contraindicated: if I have a really large and complex application that relies on managed services that can't easily be represented with a container, that wouldn't be a good use case.
C
If my test data is really large and complex, and I really need it for my different environments, and it's just too unwieldy to seed, that could be problematic. I wouldn't want to use it for load testing, because it's not designed for scale, and, as I mentioned before, these aren't designed for production.
C
So, putting it all together: to get the Uffizi on-demand-environments capability, you define your application in Docker Compose, and it's triggered by your CD events. What I'm getting ready to demo is a pull request event, but you can obviously set it up for other events. Uffizi does the orchestration, and then Kubernetes is really the infrastructure layer where these sort of mini, lightweight environments will be running. And just a quick architecture overview:
C
In an open-source setup, I've got the Uffizi app and the controller installed on the cluster itself. It's controlled by a CLI, so commands are passed from the CLI, the Uffizi app does the work, and the controller acts as a smart proxy between the Uffizi app and the Kubernetes API. It's designed to reduce the amount of capability the Uffizi app has, so the app doesn't have carte blanche access to the Kubernetes API to do whatever Kubernetes can do.
C
It has a reduced, or should I say scoped, set of permissions, which helps improve the overall security. And when the CLI runs commands, ultimately what happens through this process is that you get your preview deployments, or preview environments, running as individual pods within their own namespace in your Kubernetes cluster.
C
All right, I'll go ahead and jump to a demo now. I'll pause here: does anyone have any questions before I get into the more exciting part?
H
C
Yes, so right now it is on the application developer to do that. On our roadmap, we're planning to have our own repo that basically helps people pull production data, anonymize or sanitize it, and then inject it into these environments. But right now it's a self-serve kind of situation.
H
And the other one was about standing up an environment. I'm coming from a world that is largely VM-based, so there's a lot of in-place upgrading, where you're like, "scp this binary to that location on disk," and things can go wrong in that upgrade process that I'm not sure would be caught in the same way if we just stood up a brand-new server every time. Does that make sense? I'm kind of curious what your thinking is on that upgrade testing.
C
Yeah, and you're saying when you upgrade the node itself, like the virtual machine?
H
Like, there's a difference between having an existing version and updating over top of it, versus standing up a brand-new, empty server and putting the binary on there to begin with. Uffizi does the second, and I think, broadly, Kubernetes does the second, yeah. So I'm just kind of curious if that's part of y'all's mental map of how things are, and whether you have guidance, or is it just "we do the things that Kubernetes does"?
E
Yeah, and I can add a little bit more context there. I think, yeah, you're absolutely right: we are doing the second, and, at least in these early stages, we're taking the cloud-native approach of not assuming that the resource or the environment that was there is going to be there, that clean-slate mentality. It's a great use case that we'd love to get feedback on, in terms of how we could potentially approach solving it.
E
But in terms of where it is now, it's definitely easier to assume that, hey, it wasn't there and it's not going to be there, or at least don't expect it to be there on the next creation step.
C
Cool. Can you guys see my screen? Is that big enough? Yeah?
C
Good enough, okay, cool. So this is my application here. It's a demo microservices application; it's got five services represented, and we're actually going to add one more: a load balancer. So the scenario is, I'm an engineer working on one of these services. I've got my own branch here, my "double dogs" branch, and I'm ready to push that up.
C
So this is a dogs-versus-cats voting app, and I'm making a functional change here where, when someone votes, it's going to count as two votes for the dogs.
C
Cool, so I just opened that pull request, and so, what's going to happen... let me jump over to...
C
Okay, so within GitHub here.
C
That's going to kick off my GitHub Actions workflow. In the first phase, we're going to build the images that are defined in my Docker Compose file and then push those; we're using ECR here, but it could be any container registry. Once that step is done, the key Uffizi step here is:
C
The Uffizi CLI will be running in a GitHub runner, and it's going to be given the command "uffizi preview create". From that, it will pull the images that were just built (again, my application is defined in Docker Compose) and stand up that environment on my cluster. And then the real payoff: this process will end with a comment getting added to my pull request, and that comment will have a preview URL.
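A rough sketch of the workflow step being described (the job names, compose filename, and CLI invocation below are illustrative assumptions, not copied from the demo):

```yaml
# Hypothetical GitHub Actions job: after the images are built and pushed,
# run the Uffizi CLI inside the runner to create a preview environment.
deploy-uffizi-preview:
  runs-on: ubuntu-latest
  needs: build-and-push
  steps:
    - uses: actions/checkout@v3
    - name: Create preview environment
      run: |
        # "uffizi preview create" reads the compose file and stands the
        # environment up on the cluster; the resulting preview URL is
        # posted back to the pull request as a comment.
        uffizi preview create docker-compose.uffizi.yml
```

In the demo this logic is wrapped in a reusable workflow, so end users reference it rather than writing the invocation themselves.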
C
So, everyone on the team... basically, I opened a pull request, and all my developers get the preview URL, right? So my testers, engineers, and other stakeholders can come and review it there.
C
This takes a couple of minutes, so while we're waiting, let's look at our Compose file so everyone gets a sense of it.
C
The only thing that's really unique about using your Compose file with Uffizi is that you have to define an ingress, obviously, because we're in the cloud here. So we have this "x-uffizi" extension. This is a nice convention that Docker provides, so this file still works as a Compose file; the Uffizi extension is ignored when I'm running it locally. So I'm defining the ingress: which service (my load balancer) is going to receive it, and what port it's listening on.
C
Uffizi defaults to a 125-megabyte container, but I can set that up to four gigabytes, and so for containers that maybe require a little more memory, like this Postgres, for example, I've got that set at 500 megabytes. One other thing happening in this process is that we'll be seeding the database when this environment stands up.
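Putting those pieces together, the compose file might look along these lines (a minimal sketch: the service names, the exact x-uffizi keys, and the seeding mechanism are assumptions based on the description, not the file shown on screen):

```yaml
# Sketch of a compose file set up for Uffizi preview environments.
version: "3.9"

# Uffizi-specific settings live under an x- extension field, which plain
# Docker Compose ignores, so the same file still works locally.
x-uffizi:
  ingress:
    service: loadbalancer   # which service receives external traffic
    port: 8080              # the port it listens on

services:
  loadbalancer:
    image: nginx:alpine
  vote:
    image: registry.example.com/vote:latest
  postgres:
    image: postgres:14
    # Raise the memory limit for heavier containers (Josh cites a
    # 125 MB default, configurable up to 4 GB).
    deploy:
      resources:
        limits:
          memory: 500M
    # Seed the database when the environment stands up.
    volumes:
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql
```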
C
So, okay, it looks like it's already deployed here. This is my comment; let's go check it out. Cool, so my app is up, and I can tell my database is seeded, because I've got 15 votes in here and obviously this is a brand-new environment. Then I can go check my change here.
C
I voted once and I got two votes, so my change is good, and now I can confirm that this is working code, working as designed. I can go ahead and merge this, and when I do, it would delete this environment.
C
Before I do that, let me show off some other cool features.
C
So what if it wasn't quite right, and my feedback is, "Hey, I wanted to make a change here"? Someone comes and says, "Hey, the background color is not right," so let's just change it.
C
My container is going to get rebuilt, and then this environment gets redeployed. I'll keep the same preview URL, but my change will show up behind it without the URL changing. And I think it's kind of cool: the first time I do it, I get this rocket emoji, and then, when it gets updated again, I'll get a thumbs-up emoji, I assume.
C
I assume you can set different emojis there, but anyway, they kind of let you know the changes that have been made, and then Uffizi will actually edit this comment to say that it's been updated.
C
Yeah, so this is because I'm using our cluster and our DNS. If you installed this yourself, on your own cluster, you would use your own DNS service to set this up.
C
There are several jobs here, as we talked about. The event here was the pull request, okay, and that kicked it off. These first, I guess, four or five jobs are all about building and pushing, which are pretty standard CI/CD processes, but the interesting part here is this "deploy uffizi preview" job. We're calling a remote workflow (this is more of a GitHub Actions feature), but we have a GitHub Action that basically creates your preview environment.
C
It'll update it, and it'll also delete it, and it uses the same workflow. So I can effectively grab this preview action, add it to my workflow, and get all that capability with just a few lines here. Adam wrote this. Adam, do you have anything more you want to say about that?
E
Yeah, and then right there, that was the server pointing to Uffizi-hosted Uffizi, but of course, if you had installed it yourself, this would be your own URL.
C
Obviously, these are my build-and-push jobs. What's happening, too, is that our Docker Compose file is being dynamically rendered: we're grabbing a template Compose file, and the images that were just built by this process are effectively getting injected into it. That's how Uffizi knows to create an environment based on the images that were just built.
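That rendering step could be sketched like this (the placeholder token and the use of sed are illustrative assumptions; the demo doesn't show the exact mechanism):

```yaml
# Hypothetical workflow step: inject the image reference built earlier
# in the run into a template compose file before the Uffizi CLI deploys it.
- name: Render compose template
  run: |
    # Replace a placeholder with the freshly built image reference.
    sed "s|{{VOTE_IMAGE}}|$REGISTRY/vote:$GITHUB_SHA|g" \
      docker-compose.template.yml > docker-compose.rendered.yml
```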
C
This is the key step here that we just ran, so we updated an existing preview. All of this is to say that the Uffizi CLI right here runs "preview update", and that's how we're getting our latest commit into the environment.
C
And, okay, so back in my pull request here, I can see that this comment was edited, and of course I got this thumbs-up too, which tells me that there's been a change. So we can go take a look and see if my change was made here... oh, well, it didn't change.
C
Well, let me pause here, kind of in the meat of it, for questions or comments.
H
C
Kind of curious, so... we try to talk to a lot of people, as many folks as we can, about how they're solving some of these challenges, and I would say it's rare for me to come across someone who either hasn't built something to try to solve this or isn't actively seeking
C
some sort of solution; maybe they're going to try to build something. That's part of the reason we created this open-source project: we realized so many people were trying to solve the same problem. But I would say, broadly, I've almost never talked to anyone who's happy with their solution, and so, anyway, our goal is to run this project and keep iterating in a very focused way.
C
So, hopefully... things like user experience, the real challenges of getting the timing right to make this a good, seamless experience, are quite challenging, and, anyway, we're trying to solve that broadly for the community.
C
That's right, yeah. And you bring up a good point: a lot of times these can last maybe several days, while someone's working on a specific PR branch. So what a lot of our end users do is set a timeout, so the environment basically goes away at the end of the workday. I'm trying to pull up and show you an example here.
C
Let's see... okay, so if you see right here, this "delete preview after" stanza, I can set the timeout there, and yeah, so it'll get deleted overnight. Then, the next morning, all you have to do is basically just reopen the pull request and it'll spin back up for you. We're working to make this even more sophisticated, where you could basically say, like, hey...
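As a sketch, that timeout might be configured like this (the action reference and the exact input name follow what is pointed at on screen, but both are assumptions):

```yaml
# Hypothetical: tear the preview down automatically after a set time,
# e.g. at the end of the workday; reopening the PR recreates it.
- name: Deploy Uffizi preview
  uses: UffizziCloud/preview-action@main   # illustrative reference only
  with:
    compose-file: docker-compose.rendered.yml
    delete-preview-after: 8h   # assumed input name and value format
```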
C
And, let's see... so, we talked about it going away; let's just close this pull request.
C
Okay, so I closed it, and then the final thing we'll do: that kicked off another workflow that will run the delete-environment command, so it'll get cleaned up. And then, back in my pull request, which is now closed, you can see that there's work happening, but this will get updated to say that it's been deleted.
E
One thing that Josh mentioned is that we're focusing on developer experience, and that's why we're choosing Docker Compose. Fundamentally, a big portion of what Uffizi is doing is a mapping between Compose and Kubernetes. Of course, Kubernetes has much more granular controls around things like container life cycles, but by trying to stick with the Compose format, we're really trying to fit those definitions within that context, to preserve that mapping and the Kubernetes capabilities that we want to expose, but to expose them through a more simplified interface with Compose.
B
So one other question I have is: you focused on tying this specifically to a pull request. Do you envision being able to do this at other stages in your pipeline? For example, let's say I'm a developer and I've been notified that there's a bug I need to fix. Well, I need to reproduce that bug, right? And what's running in production is a particular version, so I might go to my GitHub repository and go back to that particular version.
B
It would be nice to launch an ephemeral environment right there, to be able to reproduce the bug; that's the first thing I'd want to do as a developer. Do you envision a way to do that? That's one thing. And then the other is: how about later on in your pipeline, when you're running automated tests before deploying to production?
F
Yeah, I'd like to answer that. For the first case especially, you can certainly run the command-line interface, either from a container or from any workstation. You would probably whip up a Docker Compose file that pointed to the images, probably the exact images that are running in production, if those are available to you. You can tell Uffizi to deploy that Docker Compose file, and then you'll have an environment that hopefully represents production, where you can reproduce that bug.
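A minimal sketch of that reproduce-a-production-bug flow, assuming placeholder registry paths, digests, and service names:

```yaml
# Hypothetical compose file pinning the exact images running in
# production, so a throwaway Uffizi environment mirrors it closely.
services:
  api:
    image: registry.example.com/api@sha256:<production-digest>
  worker:
    image: registry.example.com/worker@sha256:<production-digest>
# From any workstation with the CLI, something like:
#   uffizi preview create docker-compose.prod-repro.yml
# would stand this up as an on-demand environment.
```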
C
Yeah, and, Melissa, another way we're envisioning this being used: what I just showed is obviously a preview or PR environment, but you could use this for release environments, setting up other CD events, so that, hey, I'm ready to cut a new release.
C
But before I go into production, I want to basically be able to grab all these different features that I want to combine into a release branch, and spin that up.
C
And then... well, this is probably a good time to talk about the roadmap.
C
So we have a public roadmap; it's here, and there are kind of four main efforts. We've tried to go deep on GitHub Actions, and we still have maybe a little bit more refinement to make that an even more streamlined process, but then we're looking to expand to the other major CI/CD providers.
C
We have this GitHub Action right now that anyone can grab, and having the same type of plan or workflow for every platform is our goal. Then, adding support for Helm: a lot of folks have gone away from using Docker Compose, so we want to be able to support those folks as well, who are defining their application that way. Then, integrations with collaboration software:
C
It's nice to have the comment in your pull request, or in GitHub, but a lot of teammates don't necessarily work there.
C
So we'd make it super easy for Jira, Slack, Discord, Microsoft Teams, and other places people do their work. And then, what you're touching on: a developer being able to run the CLI locally, to do basically a quick bidirectional sync of what's on the cluster and then use the debugging tools they have locally, making that really seamless, as well as being able to SSH into the individual containers and check them out that way, is something we're wanting to add.
C
Cool, so, call to action: we're actively seeking contributors. I saw Bebop's on the call; he submitted our first outside pull request, and that's pretty awesome. He's also working on a Dagger plan that uses Uffizi. We're really excited about that, because Dagger can help us with the one-to-many problem of CI/CD. It also helps because developers could run that Dagger plan locally, and then, of course, you can run it in your CI/CD system.
C
Is Bebop on the call? I think he had made it; maybe he had to drop. Still here? Oh, Bob, say hello, are you there? He's probably working and listening. Oh my, my.
D
C
No worries, no worries, man. You can join our community on Slack; there's a lot of updates and discussion that happens there. We've also got a newsletter that goes out about once a month with updates. Again, just go to our repo, and from there you should be able to find anything you need. And here's just a bunch of links where you can find information; maybe I'll drop these in the chat here.
C
I think that's it from our team, and we'd love to have as many sidebars as people want to have.
C
Melissa, at places you've worked, or where you work now, how would you envision using Uffizi, and maybe what problems might it have solved for your teams?
B
I do like the preview environments. Unfortunately, there can be a disconnect, especially when requirements are communicated down to the development team, and it is nice to be able to get a quick preview of something that has been built without polluting our main branch, especially if there are open questions on what things look like or how things are actually working.
B
Also, something that I always end up doing, at least with our container services in the past: we always have to have this huge staging environment where we place everything, and then we need to run all of our API tests against it from outside, which is exactly what you described. So that's something I'd be interested in doing; it seemed like that environment was always in contention, because we always had to find out, okay...
C
Our senior developer is Lydia, and I asked her how painful it is, on a scale; she says it's 10 out of 10.
H
eBay just went through a very large process, called "staging get-well," which was taking this giant shared staging environment and focusing on availability: please don't break it, and if you break it, we'll cut you tickets and maybe call your phone, because it's important and it blocks everyone else from working. So the alternative feels like being on call for staging, which is... oh.
B
So, you know, another thing that I'm interested to see, and I'd like to play with this a little bit more just to get my head around it, is the permissions needed for these ephemeral clusters. Something that I've seen happen in the past is that everyone gets permission to access the staging environment, and, unfortunately, what happens unintentionally is that someone might set local settings on their machine to start collecting...
C
Yeah, I would say, the way we envision it, particularly if you're running in the open-source scenario, is that you have really just a very limited number of folks who can adjust the cluster settings, but then really all your developers should have access to deploy.
C
That is, not to change the automated settings, but the ability to define their own Compose file and run it from the CLI, in their own environment, is something we envision; we want people to do that, and we think it makes sense. So, in that regard, any developer on the team can basically have, effectively, their own namespace, or set of namespaces, where they can't affect or impact anything else on the cluster.
C
And the nice thing is, if you break one of these environments, you can just throw it away and get a new one, which is nice.
C
Oh yeah, and, Melissa, you had asked about automated testing earlier. We actually have end users who use it for spinning up their API: they use Uffizi environments and run their automated tests, in a GitHub Actions sequence, against the URL where the API is hosted. So folks are already doing that. One thing that we've thought about is maybe even adding the automated tests into the Compose file, to make it maybe a little bit more seamless, but it's definitely something.
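That pattern might look something like this (the job names, the way the preview URL is passed between jobs, and the test script are all assumptions):

```yaml
# Hypothetical follow-on job: once the preview environment is up, run
# automated API tests against its URL.
test-preview:
  runs-on: ubuntu-latest
  needs: deploy-uffizi-preview
  steps:
    - uses: actions/checkout@v3
    - name: Run API tests against the preview URL
      run: |
        # PREVIEW_URL is assumed to be exposed by the deploy job
        # via job outputs.
        curl --fail "$PREVIEW_URL/healthz"
        ./run-api-tests.sh "$PREVIEW_URL"
      env:
        PREVIEW_URL: ${{ needs.deploy-uffizi-preview.outputs.url }}
```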
G
Okay, thank you, yeah. I had a question. In multi-repo types of environments, I've had, quite a few times, a situation where I'm making a change in maybe one service, and I want to see how that would work with a change in another service, but without either of them being merged.
C
Yeah. And, also on our roadmap, we would like to make that require less manual process on your end, to have settings to do that, but it can be done manually now.
H
Cool. So we have a handful of open action items that I was just adding to the notes from past meetings. One of them is on me, which is to write up intent-based pipelines. That's done, I think, and Roxanne, who does the publishing, is on vacation; she gets back next week, so hopefully we can publish it then. Melissa, you had an action item, which was determining where in the best-practices site to add vocabulary documentation, yep?
B
Yes, we need to go back and look at Terry's suggestions for where to put that, and get some recommendations on how to get that site updated. I know both Car and I have been out for a bit on vacation, so hopefully by next meeting we can get something up and running there.
I
Okay, I'll be out on vacation that week.
A
I
H
B
D
Yeah, it's just my membership, putting me on the README; it is now merged. So, quickly.
D
Cool, so a couple of quick things for Josh and crew: at SAS, we already built this, and I don't like it, so there you go. But it works, and we haven't had to touch it for a year, so nobody cares now. So yeah, I'd be interested. Our use case, and all customers say this, our use cases are unique to us.
D
It's a lie that everyone tells themselves, but we have dug our own holes: 40 years of not-invented-here. So there's our pipeline stuff, which is what I work on. We usually keep a dev, test, prod scenario, and it's kind of.
G
D
But dev's never guaranteed to be alive, and if you break dev, you've got to go fix it. We don't page people, we don't cry; if you break dev, go fix it. And we encourage people to break things, because if you're not breaking stuff, you're not doing anything, in which case we need you to contribute more. So break more, so that we know you're here.
D
F
D
We auto-deploy to dev, and then we have functional tests that go with each service; they kick off and run at that point. We use Kafka for our events, and then we've got an audit system, which is like a receipt system that keeps track of all the stuff we did in that action: the deployment gets recorded, the functional tests passing gets recorded, the functional tests failing gets recorded, and they trigger off.
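The receipt-style audit trail described here can be sketched in a few lines. This is an illustrative in-memory model (the speaker's real system records these events via Kafka rather than a Python list), and every name in it is invented for the example:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Receipt:
    """One audit record for a pipeline action (deploy, test pass/fail)."""
    service: str
    action: str        # e.g. "deployed", "functional_tests_passed"
    detail: str = ""
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """In-memory stand-in for a Kafka-backed receipt system."""
    def __init__(self):
        self._receipts = []

    def record(self, receipt: Receipt) -> None:
        # A real implementation would serialize the receipt and produce
        # it to a Kafka topic instead of appending to a local list.
        self._receipts.append(receipt)

    def history(self, service: str) -> list:
        """Return every recorded receipt for one service, in order."""
        return [r for r in self._receipts if r.service == service]

log = AuditLog()
log.record(Receipt("orders-api", "deployed", "dev"))
log.record(Receipt("orders-api", "functional_tests_passed"))
print(len(log.history("orders-api")))  # -> 2
```

Because each action emits its own receipt, downstream steps can subscribe to these events and trigger asynchronously, which is exactly the trade-off discussed next: easy to fan out, harder to trace.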
A
D
Things are all asynchronous, so it's really easy to find stuff... no, it's not, you can't find anything. Observability is really hard. But yeah, we've kind of been down this road, so I'm interested to take this off to the side and see if it can help with some visibility. At least we don't use Docker Compose anymore; we've gone full kind for local development, and then everything else is Kubernetes all the way down.
D
Yeah, I will. I hate Helm, by the way, and we quit using it; we went to Kustomize, mainly because we had this giant deployment file. When we build a new container, we just update the URI to the container and then push the deployment out with the kubectl command, and that's all wrapped up in some nasty Python. I didn't write it, but I know the guy who did. So yeah, we'll throw the Helm chart up there and we'll go play around with it.
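The image-bump step described here (update the container URI in a Kustomize overlay, then push the deployment out with kubectl) can be sketched as a small pure function. This assumes the standard `images` field of a `kustomization.yaml` loaded as a dict; the registry and service names are made up for the example:

```python
def set_image(kustomization: dict, name: str, new_tag: str) -> dict:
    """Update (or add) an image override in a Kustomization, in place.

    Mirrors the effect of `kustomize edit set image <name>:<tag>`; after
    writing the dict back out, `kubectl apply -k <dir>` would roll out
    the new container.
    """
    images = kustomization.setdefault("images", [])
    for entry in images:
        if entry.get("name") == name:
            entry["newTag"] = new_tag
            break
    else:
        images.append({"name": name, "newTag": new_tag})
    return kustomization

# Example: bump a service image to the build we just pushed.
kustomization = {
    "resources": ["deployment.yaml"],
    "images": [{"name": "registry.example.com/orders-api", "newTag": "v1.4.2"}],
}
set_image(kustomization, "registry.example.com/orders-api", "v1.5.0")
print(kustomization["images"][0]["newTag"])  # -> v1.5.0
```

In practice the built-in `kustomize edit set image` subcommand does the same rewrite, which is one way to shrink the "nasty Python" wrapper down to a shell one-liner.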
C
Sounds good, let us know how it goes. Yep.
A
B
Awesome. We'll keep you apprised of when this recording gets put up; I've already had questions from folks who weren't able to come to this meeting about whether it was going to be recorded. The answer: we'll put it out on the YouTube channel for the CD Foundation. There's a playlist for the Interoperability SIG, and it'll be added there.
H
On a logistics note, I'll start a thread in Slack to see if folks are available for the next meeting or if we should just cancel it, so look out for that.
B
Awesome. And we didn't really get a good intro for you, Justin, since you missed last meeting, so this is the official announcement with you here: you are our new co-chair.
H
Yeah, congratulations! Thanks, thanks! I'm going to help Melissa run these meetings. I'll say in one minute what I do: I'm a principal architect at eBay, and I primarily focus on running our open source program and also on helping teams understand the social side of continuous delivery. So it's a lot of, "Hey, wouldn't it be nice if you didn't have to do that manual testing? Let me show you how that works," and, "No, it's actually safe, let me show you how we can believe it."
H
Sweet. Thanks, everyone, for attending, and have a good remainder of your day. Thank you.