From YouTube: CDF SIG Interoperability Meeting - 2020-09-03
B
Yes, I am. I'm going to be stepping away for one minute here to get my kids settled, and then I'll be back in, but yeah, I'm ready. Good.
E
How has it all been? Yeah? No, I think it's been pretty good. I took a couple of weeks off, so it's one of those typical things: take a week to relax on holiday, then you get really into it, then it's time to start something new. So then it's like context switch, context switch. But yeah, no, feeling quite fresh and a bit more ready to go.
A
Perfect, okay, good. So let's just get started: welcome to our interoperability SIG meeting. We have no real action items; there was one admin item being worked on, and an update on that would be good, but other than that there are no other action items to go over.
A
So the next thing on our agenda was to go over improvements to the white paper that have occurred, which mostly have to do with how we're defining CI and CD. I don't know if any of you would like to speak about your contributions to that paper, or your opinions on what is there right now? There is a link in the agenda doc for this meeting; I think you will have it.
A
There you go. I don't know if you've had a chance to look at it. We discussed a little bit the definitions that we're working on last meeting, and I don't know how much all of you on this call have been thinking about that or reading over that doc.
A
Yes, Christy Wilson has been adding quite a lot of thoughts, as Cameron has as well, and a number of other people. Excellent.
D
Updated... I added my thoughts on there myself. I really like the definitions that were added; I thought there were a few things that I kind of wanted to see added, so that's why I added my thoughts as well. I think it's interesting to talk about the inputs and outputs, maybe, of each of these things that we're defining there: so CI, continuous deployment, and then continuous delivery.
A
Yeah, I like that as well. It provides a really strong distinction between the different, I guess, categories, as well as emphasizing what their actual goals are: to produce these different artifacts and move to the next stage. That's great! That's really, really good.
D
Today... cool. So let's see, I have to kind of remind myself what I put down here again. Yeah, so I talked about continuous integration kind of as the processes in place that are essentially taking, you know, every time there's a new change to master: let's package up all the dependencies and the code into one or more artifacts. I talked a little bit about whose responsibility it is here.
D
Let me increase the size of this. Whose responsibility it is, kind of, to maintain the CI part of the process. Here, let's see: we talked about continuous deployment and deploying into a particular environment, versus... I think continuous delivery is kind of like the whole...
D
I don't know, the whole flow of going from CI to getting it into the hands of the user. I have seen some companies... this is where I was talking about where I was kind of pushing the definition a little bit.
D
I've seen some companies, a lot of companies actually, now starting to do canary analysis, blue-green deployments, chaos engineering, etc., and I'm not sure if these are necessarily best for putting in continuous delivery or into continuous deployment, because I think those are relevant to continuous deployment as well. And then there's this idea down here of some organizations kind of taking the idea of continuous delivery and really applying it to the whole SDLC, so kind of tracking...
D
Taking the same methodologies, let's say, that we use for CI and CD and applying them to the SDLC. So like a refinement of ideas, and you can track all of this. Sorry, I just woke up, so I haven't had coffee yet, but, like, you could, let's say, track ticket creation in Jira and have a pipeline of how that ticket moves through the development process.
D
And then when that gets cut off and built into an artifact, into a commit, and put into actual code, you can track all that, and it's pretty cool when you do; I think you can gain some valuable insights. Again, that doesn't really match, I think, most people's definition of what continuous delivery is, so I'm very happy to have it be changed, but I was curious on all of your thoughts on it. And I can stop sharing this one.
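One way to make those inputs and outputs concrete is a toy model like the one below. This is purely illustrative: these stage boundaries are exactly what the group is still debating, not a settled definition, and the wording is paraphrased from the discussion above.

```python
# Toy model of the three practices as input -> output transformations.
# The boundaries shown here are illustrative only; where each stage
# ends is the open question in the discussion above.

STAGES = {
    "continuous integration": (
        "a new change on master",
        "one or more versioned artifacts (code plus dependencies)",
    ),
    "continuous deployment": (
        "a versioned artifact",
        "that artifact running in a particular environment",
    ),
    "continuous delivery": (
        "an idea or ticket entering the SDLC",
        "working software in the hands of users",
    ),
}

for stage, (inp, out) in STAGES.items():
    print(f"{stage}: {inp} -> {out}")
```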
E
Yeah, and I think that's great. So for the last year or so I've heard so many different kinds of perspectives on continuous delivery, and in fact we have a definition in the CD Foundation FAQ, but it's a pretty short one. I think, increasingly, there seem to be these two camps, and some folks, you know, talk about the whole process of getting something delivered, which could include testing and everything that goes into it. So it's the complete process.
E
So I think there's something to be said there. I'd be keen to just make sure, within this group and within the other communities, that we're all kind of converging on something we agree on, even if it is not, let's say, what's traditionally been there, so we're making sure it reflects today's age of software delivery.
A
I also really like the continuous deployment definition: the idea that you deploy into any environment. We do this a lot in Jenkins X, and I think that's a very interesting way to think about it: you're not necessarily deploying to production, but you're deploying to different environments, and that can move through a number of them, actually.
E
Yeah, the one thing for the continuous delivery definition: I wonder if... for me, continuous deployment is a lot about the automation side, you know, everything moving through the stages and then being spat out at the end; but continuous delivery as a process has to include kind of the people element, the team element, the processes. And I wonder if there's a way we can capture that sentiment, or if it's just me who sees that.
F
I'd also comment on the continuous deployment side: so where does continuous delivery end, and where does continuous deployment start, if you would see them as two different things? And I know a lot of tools today still say they are continuous deployment tools, such as Argo CD or Keptn.
F
I believe, as well, and as such they include these canary deployments and everything within the, say, delivery process; or where testing and verifying that the deployment has been successful or not is also part of continuous deployment in that sense. And, strictly speaking, I believe that the continuous deployment phase ends when things are deployed to production.
G
Yeah, I'm actually a little disappointed with the definition that somehow, by someone, got chosen at CDF; it's lacking a lot. Now, mind you, there is not an official definition of continuous delivery and continuous deployment, and the two get intermixed a lot. I was hoping that we, as part of CDF, can set the record straight once and for all, and from what I've seen, it's time.
G
And that definition seems to be agreed upon by most when you speak to people out there. I actually had a discussion about this at the last physical conference we were able to go to, which was last year, November, with Kohsuke, and he totally agreed with me too.
G
So I think we need to rally around what you see in this. I don't want to repeat it, but just read the definition in that link, which really talks about how continuous delivery is the process of automating everything, whereas continuous deployment must include the thing that brings value to the customers, meaning deployment to production. And if it doesn't include deploying to production, you simply are doing continuous delivery.
G
Okay, so if we can read that definition and talk about it, then I'd love to see us submit something to CDF themselves. I know we're a SIG, but if we can get CDF, the, I don't know, technical operating committee or something at CDF, to agree with this and literally update the definition on the CDF site, then somebody ought to be able to go to the Continuous Delivery Foundation and see a proper definition.
B
Yeah, I guess we just need to get clear on that, because, like you've just said, I would flip it, and I think Cameron has that flipped too: delivery being a superset of deployment, where the deploy to production, like continuous deployment, is just getting it out there, but delivery would be delivering the whole thing, in an automated fashion, out to production. So, yeah, I mean...
G
I'm just letting you know what's been discussed out there in the world; it's actually the other way around. I didn't set this myself, but when you speak to others, and it's not just me, I've spoken to several other people, the thing that's coming out is exactly the opposite of what's currently in the white paper. But I'm happy for us to discuss it.
E
I see a big difference in kind of the deployment-versus-delivery question and what each encompasses, so I would love to kind of dig into that and find, you know: where do we agree, and where are the sticking points? So I'm going to take on that discussion, and I might reach out to a bunch of you individually.
E
I don't know if folks are aware, but as of a couple of days ago I just started in the role as the exec director of the Continuous Delivery Foundation, so moving on from CloudBees to, yeah, just kind of really take CDF to the next level. I think a lot of the conversations we're having at the board level are pretty exciting, and it is just that: for CDF to take a leadership position and to help advance...
E
CI/CD, and, you know, drive for clarity and drive for unity, and do a lot, particularly on kind of practices: what does that mean, can we dig into that, and just help set out what modern software delivery will look like and how people can make choices that work for their specific situation?
E
Okay, yeah, so I think I'll say: yeah, no, just happy to have these conversations, dig in, and just push things forward.
E
So I'll start reaching out to folks about, yeah, just various perspectives, and try to build a picture of where different people stand, where there's consensus, and what needs some thrashing out and some agreement.
G
So, Tracy, you missed out on the presentation last time.
E
I know, I have that on my list to go catch up on, but yeah, no, the timing of taking some time off just didn't work out. But yes, apologies; that is high on my list.
E
Okay, so yeah, I think I'll just finish out by saying: folks, please feel free to reach out to me, and, as I say, I'm just going to finish settling in, and then I'd love to get stuck in and meet particularly with folks individually and within this group. I think it's a great forum, and everything that's happening on interoperability is going to be high on my list to keep driving and pushing forward.
F
Did I start? Yeah, sure, why not. Yeah, so we've had a couple of meetings now. We were formed, as you might recognize, some weeks before the summer, so we had, I think, like three or four meetings during the summer; they were not that active. So far we have mostly been discussing and pondering about our purpose a bit, but lately, or I would say mostly in the last meeting...
F
So that's something that we aim to work on more. We don't have much of a result to show at this point, I would say, but we have some ideas that we have started to put down in our meeting minutes at least; you might have seen some of those already. And we intend to create a separate document to document our proposed vocabulary for events in a CI/CD context, and you will see more about that, of course, going forward. And then also Andreas will talk about that.
C
Right, so regarding the white paper: this is currently in process. We kicked it off, and now we are iterating on it within the workstream, and afterwards we'll share our common ground, our findings, yeah.
A
Great, thank you. And next we have a presentation by Dave Sudia on CI/CD at GoSpotCheck.
B
Yeah, thanks. So, I don't know, I would love feedback on whether I'm hitting the right points in this. What I'm essentially presenting here is a use case of a small organization, and so, yeah, it was kind of hard to tell, or decide, exactly what would be most useful.
B
So I'm going to not fly through this, but kind of go through it at a steady pace, and then I'd love to have more Q&A time, or just feedback time, on whether I've hit the right things. So I'll share, and yeah: continuous deployment and delivery at GoSpotCheck. Just to talk a little bit about what the use case is here: the application that we have is essentially field data collection.
B
So the example I usually give is: you're Pepsi, and you have an agreement with Walgreens that, you know, the Pepsis will be in stock and at eye level, and they get a discount for that. Previously, someone would be out there with a clipboard, sort of marking off these things, and now a central administrative office makes missions that field reps fill out, and then there's sort of close-to-real-time analytics that happen.
B
We have a machine vision project where, instead of them having to mark off what the status of the fridge is, the Pepsi person can just take a picture of the fridge, and then there are workflows and notifications that go through around reporting and alerting on those kinds of things. So, to do that, we have Kubernetes infrastructure. The app started as a Rails monolith on Heroku; there are some other side services written in Rails; more recently we switched to Go microservices.
B
We have one big database that we manage fully ourselves on our own infrastructure, plus Couch for another of the photo-based sort of services. We have managed data things, we have cloud functions, we have a machine learning pipeline, we have a data pipeline, we have mobile apps. So, you know, one of the things I was most excited about when I first learned about the Continuous Delivery Foundation was, it was like: ooh!
B
Right now... we peaked at one point at about 150 people, about 40 of whom are engineering and QA, and there are sort of 2.5 ops people, of which I am one. So I have one team member, a teammate, and then I have a manager who tries his best to be an individual contributor, but really he manages three teams now, so he's not; but he's also, like, the person who knows the most about the database, right. So we're just sort of at that stage of maturity across those engineers.
B
We have a variety of ops competency levels. We have people who were hired straight out of a boot camp and into an organization that was fully on Heroku, and they literally don't know what a virtual machine or a server is, because that's all been abstracted away from them for their entire career. We have our team, which historically has sort of been everything but feature development; when my boss was first hired, they literally said: we don't know what you do, but we've been told...
B
We need to have one of you, you know? And he was like: okay, I'll just try to find a way to help, right. We've tried to sort of push, over the last couple of years (I love the thumbs up from Roman, thank you), you know, this idea...
B
That DevOps is a culture, not a team. Because our team was the DevOps team for a long time, and we sort of changed our name last year to emphasize that. You know, when we were on Heroku, a lot of this was really handled by Heroku; we wrote the CircleCI config files and stuff. But as we now have our own set of infrastructure to manage, we've sort of had to start pushing more things left.
B
So one of the things I'm doing this year is running working groups, because we don't have, like, a platform team to write these things. We did at one point, but what we found was...
B
It was never really treated like a product. And so, you know, when Roman was giving his talk last week about building this sort of internal product around deployment, I was like: yeah, that's the way it really has to be done. You have to have feedback from the users and go through this sort of iterative development cycle. We had a platform team, but they sort of got broken up and pushed out onto multiple teams.
B
So this year I'm trying to build an internal platform with distributed input, just because of the size that we're at. We're sort of in this weird space: when we were on Heroku, we were one of their top-ten customers in terms of what we paid Heroku, and we literally just kind of outgrew their platform.
B
But we're not big enough to have a team that builds things, right, where we just kind of say: yeah, you four people go off and build the platform. And one of the things we've kind of discovered over the last year and a half, two years, is that support, like our customer support team, really needs to have a seat at the table, because one of our big differentiators in our market is customer support, and, you know, they want to be aware of when things are going down and having issues. And a big conversation that we've had, when I started talking about continuous deployment and continuous delivery...
B
Was that the head of our tier-two support team said: man, I tried to Google how you support continuous delivery, and there's nothing out there; it's all just written by engineers about how you do it. And I was like: well, Micah, maybe you and I are going to have to write that article. Because if you're pushing 20 changes a day, you know, how do they become aware of that, and how do they become aware of the status of canaries and things, so they know to be ready...
B
If there are issues? I think there's an in-built assumption about canarying, in a continuous delivery pipeline, that constantly there's a chance that something will be going wrong for a percentage of customers, right. And so that's, like, a major conversation that we're having internally as we try to build this out more. So, the existing tech we had when we started our migration over to more cloud-native stuff: we had Heroku, and we had CircleCI doing the CI stuff.
B
We were using Sumo Logic for logs, and we had New Relic for observability in the main app.
B
I came onto the team in about February of 2018, and I was brought in largely to help guide this migration out of Heroku and into something else. So we decided pretty early to go to Kubernetes, largely sort of as a future-proofing thing. It was kind of like: well, we know we're going to make this big leap; we know it's going to be uncomfortable and inconvenient for a while, because it's very new; but we won't have to do this again in a couple of years...
B
Oh, now we want to jump ship again, over to Kubernetes. So the market was very, you know, immature at that point in terms of support for things and all that; every tool we've had has gone through multiple breaking changes over the last couple of years, and that was okay and known and expected. But one of the first things we did was sign up with Harness, sort of mid-to-late summer of 2018, because, and I'll kind of talk about the needs for that in a bit, but part of it was that it was the only thing out there that really supported a lot of the use cases we had. Towards late 2018...
B
We started getting our first, like, real cloud-native apps deployed into Kubernetes that were going through a full sort of CD process, but they were for newer products or, you know, sort of deep back-end kinds of microservices.
B
They weren't a lot of really customer-facing, high-impact things. Early 2019, we got our first, not legacy, but sort of existing, application migrated over from Heroku, and by late 2019 we had almost everything, including our main Rails monolith, migrated over. And now we're sort of in this long tail of applications that we need to migrate, while new things are also starting there really just by default.
B
You know, maybe we're not actually paying anyone, but even then, you know, we were early to deploy Jaeger and Prometheus; but as the vendors that we use have started supporting OpenTracing and Prometheus metrics, we've shifted over to: great, we will pay you to manage this infrastructure. It's difficult for us to do Prometheus at scale, right, because that requires someone managing that whole system, and we don't have the manpower for that. There need to be buttons to push. We have...
B
You know, on our path to just sort of full, automated continuous delivery, we still have manual approval steps and things, and we have QA team members who are less technical. They might be sort of familiar with a command line, but not enough to dive deep into YAML and that sort of thing; they need to be able to work with the app and then come in and be like: yes, this is good to go.
B
It needed to be adoptable and adaptable, in the sense that, you know, developers could take on this piece and then this piece, and sort of build out as their comfort grew. One of our biggest learnings in this space, in terms of getting our developers on board with these pieces, has been not throwing too much at them at once. We can't just, like, get someone to drop into an automated continuous delivery pipeline, because there's just too much to learn and too much practice change that has to happen.
B
So I have, a number of times, sat down with someone and taught them all about Kubernetes, and then six months later had to do it all again, because that was when they were ready to do it. Or, you know, what we've sort of started trying to shift to is, like: I had Jaeger set up, and then I was waiting for the first time that someone said, "Where is this request dying? I'm so frustrated, I can't do this," and I'd be like: guess what I have for you.
B
I have distributed tracing; let's talk about what that is. Because they weren't ready to bring that into their schema of how things worked until they'd had that experience. And now that more and more people have that experience, it's easier, and there's a little more institutional knowledge that's starting to spread. But yeah, we've had to be really careful about not giving people too much too early, while trying to have stuff ready for them ahead of time for what we know the issues will be.
B
Conceptually, then, it's much easier for people to dive in to configuration as code; but things that start as, and are purely, configuration as code have been really frustrating for people just stepping into this world, because it's harder to figure out the schema and how things relate, you know, jumping into...
B
You know, what's an app versus a service versus a workflow versus a pipeline? Like, all the things in the Rosetta Stone document that's being built are harder to grok when you're stepping purely into, like, a bunch of YAML, right. And a big requirement for us was that our continuous delivery pipeline did not just support Kubernetes.
B
One of the things that we found, looking at a bunch of different tools that existed at that time, was that, you know, they were cloud native; they were Kubernetes-specific. I'll try to call out a couple later, I think, on a slide, but that was a difficulty for us. We have cloud functions, we have static sites, we have mobile apps; we have, you know, all these things that do not exist in Kubernetes. And a lot of the newer pipelines at that point...
B
That really were, like, you know, pushing full-CD sort of pipeline capability were very Kubernetes-specific. Because, again, we have all these different things.
B
Our big rock coming into this process was believing that open standards are worth adopting: they provide flexibility and interoperability, and we were looking for vendors who agreed with that. And so, again, we try to buy and not build, but we also accepted that, for a time, we'd have to wait for the market to catch up with that belief. You know, at the time, going to New Relic or Datadog, even, like, Google Stackdriver...
B
You know, we kind of did some experimentation, and they all charged on cardinality of Prometheus metrics instead of volume, and anyone who's worked with Prometheus knows the cardinality is off the charts. The number I throw out to people is: if you turn on a Google Kubernetes Engine cluster with three nodes, and you do not deploy anything into it...
B
You have 200,000 distinct metric series that come out of that thing. And so if you then hook it up to Google Stackdriver, the cluster will cost you $150 a month, but the metrics will cost you $3,500 a month. And so, yeah, you know, that's a critical thing for them, getting back to the verification pieces...
B
That we're talking about, right. And so figuring out how that stack worked, in a way that was cost-effective but still, you know, gave us that efficiency, has been difficult. So what we're finding now is that we're sending all our metrics to Sumo Logic, because they ended up building out a system that would accept Prometheus metrics in a way that made sense for how Prometheus metrics work.
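The cluster-versus-metrics cost gap described above can be sketched with back-of-the-envelope arithmetic. The per-series price below is simply implied by the two figures from the talk ($3,500 for 200,000 series); it is not any vendor's actual rate card.

```python
# Back-of-the-envelope model of the pricing trade-off described above.
# All prices are illustrative, taken from or implied by the anecdote,
# not real vendor rates.

def monthly_metrics_cost(series_count: int, price_per_series: float) -> float:
    """Cost of ingesting `series_count` distinct time series for a month."""
    return series_count * price_per_series

cluster_cost = 150.0          # three-node GKE cluster, $/month (from the talk)
series = 200_000              # series an empty three-node cluster emits (from the talk)
price = 3_500.0 / series      # implied $/series/month from the $3,500 figure

metrics_cost = monthly_metrics_cost(series, price)
print(f"cluster: ${cluster_cost:,.0f}/mo, metrics: ${metrics_cost:,.0f}/mo, "
      f"ratio: {metrics_cost / cluster_cost:.1f}x")
```

The point of the sketch is the ratio: the telemetry bill dwarfs the compute bill by more than twentyfold before a single workload is deployed.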
B
So, what we found... you know, we've been using Harness for a couple of years now, and, I mean, the first thing was: it was there when we needed it; it supported more than just Kubernetes. Prior to Harness, and I'm sure a lot of people here can sympathize with this, we were using our CI system to do CD, and so we had some 900-line Python scripts that people had written to handle a bunch of, you know, deployment work. Harness has great trigger, workflow, and pipeline execution.
B
It has really seamless, like, code-and-UI change management, so people can work in the console until they're ready to work in code. For us, the major cons, and this is why I joined this group and why I'm really interested in this space and the SIG, are that canary and blue-green in particular are not hot there, because that requires integration with a mesh, and they support Istio, but not, like, the mesh that we use, Linkerd.
B
They have really great interoperability with, and integration with, like, Prometheus and Sumo Logic and a lot of the verification providers, but not in the pipeline process. So a concrete example of that is Flagger. Flagger, to us, looks super promising, because it's purpose-built for doing canary and blue-green management.
B
It works natively in Kubernetes and with every mesh, including, like, the one that we use. It also works with Helm, and not just Kubernetes, you know, deployment YAML definitions and stuff. But it's, like, hard to integrate back, right. And so this is where I'm like: this is why I'm here, trying to help figure out... I'm not just a person to sit back and be like, why doesn't the world cater to my whims?
B
You know, be an active part of figuring out how we help that. Because I think we're in with Harness for the long term at this point, but there's a lot of promise in making sure that all these things are easy to put together, because I really believe that the more things are based on open standards, the easier we can just, you know, connect things. Roadblocks for us, for still really reaching full...
B
True continuous delivery on most of our services: one is having small diffs, which is more of an internal process issue than an external one. But I think there's a chicken-and-egg thing here, of: until it's easy to continuously deliver, it's harder to make small diffs make sense, because, you know, QA still has to go through them and everything, so they might as well handle the big chunk in terms of how they manage their time. And then, on the technical side, like, really getting verification down and getting things back into the pipeline.
B
So it can... you know, and this came up earlier, particularly around canary deployments: making sure that you know whether or not the canary is good before you continue to roll traffic. And then even doing that manually, or checking it manually, and then moving it to full automation, is where we're still sort of in the process of getting to. One of the working groups I'm running this year is literally just around observability and reliability: like, what metrics do we want?
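The manual-then-automated canary gate described above can be sketched as a loop. The `healthy` check here is a stub standing in for a real verification provider (Prometheus, Sumo Logic, etc.), and the traffic weights and error threshold are illustrative, not anyone's production values.

```python
# Sketch of a progressive canary rollout with a verification gate.
# `healthy` is a stand-in for querying a real metrics backend; the
# weights and the 5% error threshold are illustrative assumptions.

from typing import Callable, List

def rollout(weights: List[int], healthy: Callable[[int], bool]) -> str:
    """Shift traffic through `weights` (% to the canary), gating each step."""
    for w in weights:
        # (route w% of live traffic to the canary here; omitted)
        if not healthy(w):
            # verification failed: shift everything back to stable
            return f"rolled back at {w}%"
    return "promoted to 100%"

# Example: the canary's error rate becomes unacceptable at 50% traffic.
error_rate = {5: 0.001, 25: 0.002, 50: 0.09, 100: 0.09}
result = rollout([5, 25, 50, 100], lambda w: error_rate[w] < 0.05)
print(result)  # rolled back at 50%
```

Replacing the lambda with a human sign-off gives the manual gate; swapping in an automated metric query gives the fully automated version, which is the migration path being described.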
B
So what we really need, or are looking for, from our system is, you know, push-button deploys, and this isn't even for the applications themselves; this is, from my team, for our system, right. Like, if I'm going to put something out there, like if I was going to go deploy Spinnaker tomorrow, or Jenkins X, or something, it would need to be a Helm chart, because, like, I've got time to configure my Helm charts, but I don't have time to go build all of the deployment capability around whatever I'm about to go put out there.
B
So, you know, as we've talked about different orgs taking over more ownership of Helm charts for specific chunks of this, like, that's really encouraging to me. Things need to have GUIs, especially for learning, for the people coming in. We're looking for interoperability and open standards.
B
We're also looking for sane defaults, because, you know, again, there's a piece of this for us, and my team, that is: we have so many things to learn that we can't become experts in every tool we're deploying, and so anything that provides even just a basic set of defaults...
B
The value goes up incredibly for us, because we don't have to dive in deep right off the bat and learn everything about a tool before we deploy it, so that we can come up with those defaults. And then another thing is paid support options; like, we pay for open source software. I don't know if that makes us, like, a unicorn here, but, you know...
B
We started off with, as an example, like, the Ambassador API gateway, and we're now paid customers of Datawire, and that was a really easy sell to my VP, because I was like: hey, all our traffic runs through this thing; we should have support if it goes down. And he was like: yes, we should, right. So it's an easier sell than you might think.
B
Once we hit a certain critical mass of use of a tool, I can get money for it. We're not looking to manage it ourselves; we're not looking to just run on open source and free forever.
B
When we started, you know, the thing that drove us to more of a proprietary tool like Harness is that the alternatives were kind of just building blocks, like Tekton. And, you know, Tekton is super cool, but Jenkins X is cooler for me, right, in terms of my use case in a small company. When Tekton first came out, I went and looked, and I was like: I don't have time for this.
B
You know, this looks great, but I can't do this, because of my other commitments. Whereas, you know, there are more mature, higher-level-of-abstraction tools that are coming out now, that look way more promising, that are built on those building blocks. And then the other thing was that there were just really Kubernetes-specific tools, like, say, Screwdriver: a great platform, but it was very Kubernetes-specific.
B
What we're really excited about is just the fact that there is growing support for open standards, there are more robust tools built on things like Tekton, and there's this group, because I think this is really critical work for figuring out how things can come together and interoperate.
B
You know, I'm giving another talk in a couple of months around Sumo Logic specifically, but the fact that we adopted OpenTelemetry, OpenTracing, and Prometheus means we don't have to wait for companies to build integrations with each other, because the integration exists purely through the fact that there are open standards, right? So I think the more we can create those standards, the better things will be. That's the presentation, and I'd love to get feedback and questions.
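To illustrate the point about integrations existing purely through open standards: the Prometheus text exposition format is an open, documented wire format, so anything that emits it can be scraped by any compliant backend with no vendor-to-vendor integration work. A minimal, stdlib-only sketch of the format (the metric names are illustrative, not from the talk):

```python
def to_prometheus(metrics, help_texts=None):
    """Render counter metrics in the Prometheus text exposition format.

    metrics:    dict mapping metric name -> current counter value.
    help_texts: optional dict mapping metric name -> HELP description.
    """
    help_texts = help_texts or {}
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append(f"# HELP {name} {help_texts.get(name, name)}")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Two illustrative application counters exposed in the standard format.
text = to_prometheus(
    {"deploys_total": 12, "rollbacks_total": 1},
    {"deploys_total": "Total deploys", "rollbacks_total": "Total rollbacks"},
)
print(text)
```

Any backend that understands the standard, whether Prometheus itself or a vendor collector, can consume this output unchanged; that is the interoperability the talk is describing.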
G
Wow, David. You should either come work for us, or I should come work with you guys.
G
Well, even though you're a very small company, the journey you're going through is, you know, something a biggie like eBay, which has been around a long time, is also going through. Because it's brand-new technology, it's the very same set of challenges that you're having to solve.
G
It's just that the organization we're having to solve it for (and when I say organization, I mean the group of developers at eBay, 4,000 plus) is just a lot bigger, that's all; the challenges are the exact same. So this is actually awesome. I'm going to take up some of your time later, not here; I want to set up a meeting with you. Actually, I want to find out about your experience with Harness. I'm seriously looking at that as well, and, you know, what was your experience?
G
What do you think? I want to dig into the pros and cons. I've had time with Jenkins X and things like that, but yeah, if you don't mind me taking up some of your time later, that would be awesome.
B
Yeah, and it's the same story with Datadog, with New Relic, with any of these companies that consider Prometheus metrics to be external metrics. It's the exact same story: pricing it out is almost impossible, because the scale of these things is so large.
B
G
That's what's driven it. People come to eBay and they say, "Why are you running your own data centers? Are you out of your mind? Are you crazy? Why didn't you just use GCP? Or Azure, of course; definitely not Amazon, we would never use Amazon as a back end." But they say that until they hear the dollar amounts.
B
E
Yeah, David, I'm going to echo Roman: I thought that was a fantastic talk, with just so many takeaways. It seems like you're a particularly mature team, with the whole suite of practices, even just "don't build it, buy it." That's not necessarily something you can take for granted with a lot of teams.
E
A couple of things I'm maybe just going to clarify, but yeah, there's just so much there I'd love to dig into.
E
High level: with the move to k8s and the new stack, did you succeed in doing that across all the apps? Or is it still a subset, and the only…
B
Apps that are left in Heroku are a couple of small things that really no one's still developing on. We have two more that have to move over in the next two months, but that's because they're on a Heroku stack that Heroku is deprecating, so they will cease working on November 2nd. They're pretty critical, like custom customer integrations, but they were written, they've worked, and so no one works on them.
B
B
It's a pain, it's not convenient, you know. But even in that space, it's kind of like: all right, we've written a bunch of custom Dockerfiles, but what we're in the process of now is just moving back over to Heroku's buildpacks, because half the stuff we have was written for Heroku in the first place. And now we don't have to manage security, right? To a certain extent, Heroku kind of weeds out all the critical and high CVEs from their container builds. And so, you know, no more.
B
Writing Dockerfiles, right. We're writing our own Helm charts that are standardized across all of our applications, so we're trying to make things easier. I literally had a developer at a hackathon last November come up and say, "I don't really know what I'm going to work on this week, other than trying to figure out why this is such a pain in the ass." So I was like, okay, message received, Matt. I mean, he wasn't directing it at us, but it was like: all right, we need to.
B
We got two months of a product manager's time and we made personas of the engineers across our organization. So now I've got groups: a back end, a front end, a newbie, a lead, a support person, and we're trying to get them together and figure out, how do we do security better? How do we do observability, and builds, and everything? Because that's what…
B
Yeah, and that was just, I don't know, we came up with a couple of ideas and that was the one that stuck, but there's no good name for it. We're just trying to communicate to everyone else: we don't do literally everything but write feature code now; you guys have to take on a bit more of this. We are managing, we're…
B
E
Yeah, no, I think that's good, and there are so many other ideas. But I might follow up just on your working groups and the concept of bringing support to the table. I think those are pretty important conversations, and I'd love to have them with other folks as well and see what's happening there. And yeah, I love the open standards.
E
I think I'm preaching to the choir here, but I'd love to see the practical ways it makes a difference to organizations, and why that's so important.
B
Yeah, for sure; happy to talk more. You'll more often have to shut me up, so, yeah, no problem.
A
Just to say again: great talk, really enjoyed it. I took screenshots to share with my team. There's really…
A
…to the notes. Perfect, okay, so I'll show that to you. It was really interesting to see very clearly what your pain points are, what you want, and what you actually need from your tools. That was just really helpful. But one of the things, when you were talking about how support needs to be at the table…
A
I thought that was really important and interesting. One of the things we were discussing with Jenkins X (it's not built, it's just an idea now, but it's doable) is making feature previews. Right now we have preview environments: if you're working on your team and you just change one thing, and everything else stays the same, you get a preview environment and you can literally see it.
A
So we spin it up for you, and you can do all sorts of testing on it, whatever you want. One of the things we could do is a feature preview environment, so we can bring in multiple changes and bring them together, and you then have that space to check them. It gives you more room for testing and for being sure of what you're deploying when you then do your canary release.
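The feature-preview idea described above can be sketched as layering several changes onto one baseline environment: a regular preview carries a single change, while a feature preview merges the overrides from several changes into one deployable spec. A minimal sketch, with hypothetical service and tag names (this is not the actual Jenkins X implementation, which per the speaker does not exist yet):

```python
def feature_preview(baseline, changes):
    """Combine several in-flight changes into one preview environment spec.

    baseline: dict mapping service name -> currently deployed image tag.
    changes:  list of dicts, each mapping the services a change touches
              to the image tags it proposes; later changes win on conflict.
    """
    env = dict(baseline)
    for change in changes:
        env.update(change)
    return env

# Hypothetical example: two open changes previewed together on one baseline.
baseline = {"web": "v1.4.0", "api": "v2.1.0", "worker": "v0.9.2"}
changes = [{"api": "pr-101"}, {"worker": "pr-102"}]
env = feature_preview(baseline, changes)
# env deploys api at pr-101 and worker at pr-102 on top of the baseline.
```

The design choice mirrored here is that the feature preview is just a fold over per-change overrides, which is why it gives "more room for testing" before a canary release: the combined state is exercised before any of it reaches production.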
A
B
I'd be happy to have more conversations with you guys on this. I think the difficulty we've faced (and this is something we're struggling with right now) is that when you start to get into a microservice stack, it becomes: well, which service am I working on?
B
What other things does it need to be interacting with? I might be working on a feature, but it's for the third service down, and that's not an easy problem to solve. The Ambassador folks are working on something called Service Preview, and it's really cool. It's actually built off their Telepresence tool, where you can basically route specific requests to your laptop and then back up, so that you kind of intercept the traffic.
B
But to do that, you have to forward some header. It's also built around a model of: I have this one app, right behind the gateway, that I'm working on. But if I want to talk to the fourth thing down, I'd have to be forwarding headers all the way down, right? Everything gets harder when you go to microservices.
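The header forwarding being described works like trace-context propagation: every service in the chain has to copy the routing header from its incoming request onto each outgoing call, or preview routing breaks at the first hop that drops it. A minimal sketch; the header name `x-service-preview` is a hypothetical stand-in, not Ambassador's actual header:

```python
PREVIEW_HEADER = "x-service-preview"  # illustrative name, not the real one

def propagate(incoming_headers, outgoing_headers):
    """Copy the preview-routing header onto an outgoing request's headers.

    Lookup is case-insensitive, as HTTP requires. If the incoming request
    carries no routing header, the outgoing call is returned unchanged and
    the request falls through to the normally deployed service.
    """
    for name, value in incoming_headers.items():
        if name.lower() == PREVIEW_HEADER:
            out = dict(outgoing_headers)
            out[PREVIEW_HEADER] = value
            return out
    return dict(outgoing_headers)

# Service B receives a tagged request and must tag its own call to service C,
# otherwise the preview route stops working at "the fourth thing down".
incoming = {"Accept": "application/json", "X-Service-Preview": "dev-laptop-42"}
call_to_c = propagate(incoming, {"Content-Type": "application/json"})
```

This is exactly why the speaker calls it a pain: the propagation step has to be implemented (or injected by middleware) in every service, not just the one being worked on.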
A
Yeah, we were having a conversation today with another company about this very issue, and one of the things that we've found works for Jenkins X, and that we were suggesting to them, is having version streams. So…
G
Yeah, I'd love to see that; I'll be happy to talk to you about that as well, David. The whole journey into microservices has been rocky, for sure. There are certain best practices that, if you don't do them, you're going to kill yourself.
B
G
B
E
Any other comments or questions for Dave? Just one quick one: did you mention which clouds you're using? I don't know if I missed that.
B
Oh, sure, I don't think I did. So about 90% of our stuff is in Google. Before we made that commitment and did a full move, a lot of our data pipeline was on Amazon, and still is. We have a number of buckets that live in Amazon that will probably just always live there, because it's not worth the time to fully migrate them, but all our Kubernetes infrastructure…
E
B
I mean, we've talked about it. We looked at a company called Aviatrix that sort of specializes in setting up a superset of your network across multiple clouds. And honestly, what we've ended up finding (people talk about cloud lock-in, and it's always about proprietary technology) is: that's not it, man. It's the egress fees.
B
You know, right now our data pipeline still lives in Amazon, but it's reading from our database that's in Google. So it basically constantly reads our entire database out, to then dump into our analytics data set, and that costs us thousands and thousands of dollars a month and is a major source of our Google cost. So we're trying to move our data pipeline over to Google, not because of cross-cloud concerns, but mostly around pure egress speed and cost.
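To make the egress point concrete: providers charge per GB leaving their network, so a pipeline that continuously re-reads a database across clouds pays for the full dataset on every pass. A back-of-the-envelope sketch; the $0.09/GB rate and the data volumes are illustrative assumptions, not figures from the meeting:

```python
def monthly_egress_cost(gb_per_read, reads_per_day, dollars_per_gb=0.09):
    """Estimate monthly cross-cloud egress fees.

    gb_per_read:    size of the dataset pulled out on each pass.
    reads_per_day:  how many full passes the pipeline makes per day.
    dollars_per_gb: egress price; 0.09 is a rough list-price ballpark,
                    used here purely as an illustrative assumption.
    """
    return gb_per_read * reads_per_day * 30 * dollars_per_gb

# A hypothetical 500 GB database re-read four times a day:
cost = monthly_egress_cost(500, 4)  # 500 * 4 * 30 * 0.09 = 5400.0
```

Note the asymmetry that drives the decision in the talk: moving the pipeline next to the database makes the transfer intra-cloud, where the per-GB charge largely disappears, which is a bigger lever than any proprietary-technology lock-in.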
B
So that's kind of the big thing for us. But we're not big enough: we run a zonal cluster for production. We might move to regional, but regional had complexity that didn't make it worth our time versus the benefits, which were full disaster recovery and cross-region stuff. That's on our plate for the next six months, but we're just not big enough to do cross-cloud things.
E
A
Great, thank you all for coming to today's SIG meeting. This was an excellent meeting; really had a great time. Any last-minute comments or questions?
G
Can we just get the Zoom link for this meeting into the invite, please?
A
E
At the table now, yeah. I've asked for permissions to everything: give me access to everything. So, thank you, I'll do that.
A
No, that was a good reminder, thank you. Excellent. We meet every two weeks, so I look forward to seeing you all in two weeks' time. Good, excellent.
C
A
A
Tracy, hey, hey! I would love to have a quick chat with you whenever you can. It's not pressing, but, you know… Let me check my calendar.
E
I've got 15 minutes now, but I can send you something for a bit later today, maybe in an hour. Yeah, whatever works for you. Yeah, cool, okay, okay, yeah; it'd be great to catch up. Yeah.