From YouTube: Ops Backend Group Conversation (Public Livestream)
A
All right, my pocket watch tells me that it is top of the hour, so I'll get started. Hello again; we just had a group conversation for the Secure stage yesterday. My name is Kenny Johnston, and your other host is Talia, who is the engineering director for the Ops stage. Let's get started; the slides are up there. The first question was from Scott. Scott, would you like to voice it, or is it me? Okay. I'll answer across both of them, and then Daniel and Sarah, if you want to contribute your specific thoughts for Configure and Monitor specifically. First and foremost, it's dogfooding for me. You know that the Configure and Monitor stages in this Ops group are at the end of the DevOps lifecycle, and we talk about wanting to be a complete DevOps platform, but we ourselves don't use it for that complete loop. So it's really about focusing on our own teams' dogfooding, and there are a couple of initiatives.
A
We met with Devon Silva from the infrastructure team, where they're moving some of the non-core applications to using Auto DevOps and monitoring, and using our new incident management capabilities. (My dog's talking to me.) So I think that is going to be first and foremost: really get our own experience and have a dogfooding team give us rapid iteration on those capabilities. And then, you know, I think we've done a lot with Auto DevOps, but we're going to put some pressure on Auto DevOps to work in more robust places. Auto DevOps is great.
A
It's great at getting you started, and I want it to also evolve with applications as they become more mature and move into production. So those would be mine. Daniel, I'll let you go first if you have any additions.
C
For the next three months, our immediate priority is bringing both Auto DevOps and the Kubernetes integration to a viable stage. I think that we're already there with the Kubernetes integration; the last bit we had there was the ability to uninstall the GitLab-managed apps, and that is mostly all merged. And then for Auto DevOps, it's making Auto DevOps smarter about when it should run, so that it only runs where it can add value.
C
That is something that we're currently working on, and we expect it in March. Then, immediately after that, we also want to bring our Serverless category to a viable state, and that means things like, you know, having SSL by default and the ability to provide a domain for each one of your functions without you having to configure anything. So that's going to be a focus for the coming months, and we'll also spend some time bringing some of the other categories to minimal; specifically, I'm thinking about chaos engineering and cluster cost.
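For context on what turning Auto DevOps on actually involves, here is a minimal sketch of enabling it for a single project over the REST API. It assumes the Projects API's auto_devops_enabled attribute; the host, token, and project ID below are placeholders.

```python
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"    # placeholder host
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder token
PROJECT_ID = 42                                     # placeholder project ID

# Turn Auto DevOps on for one project, so the default pipeline
# (build, test, review, deploy) runs when no .gitlab-ci.yml is present.
resp = requests.put(
    f"{GITLAB_API}/projects/{PROJECT_ID}",
    headers=HEADERS,
    json={"auto_devops_enabled": True},
)
resp.raise_for_status()
print(resp.json().get("auto_devops_enabled"))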
D
Yeah, thank you for the question, Scott. I'd say, building on top of what Kenny said, it's closing the loop of the DevOps flow. Specifically for Monitor, I hope to start with incident management, which is that last part: giving people the ability to react to what they've built, understand how it's performing, and then, in the future, providing system recommendations for how they can infuse operations back into their planning cycles. So, incident management right now.
E
One thing: on slide 11 you showed some momentum on throughput as the teams are growing, but we also know that team size has largely stabilized for the year, because we've reached full staffing, I believe. So I'd just be curious to hear, you know, timeframes for where we think throughput stabilizes and kind of what our targets are there.
F
You're right, we're fully staffed for the Configure back-end group, and we believe we have enough bandwidth on front-end as well, so this particular group is not lacking. I expect throughput to start to stabilize at the level that you're seeing right now for July; July has been a really great month. However, that team is also helping onboard the Serverless group, so there's still close coordination between the teams, which is extremely valuable because Serverless is a brand-new team. Gores has been doing a fantastic job, you know, seeding the team and working on those initiatives, but we still need to onboard additional engineers, and Nick (welcome!) joined as the engineering manager for that team. Again, they're brand new, so there's going to be a little bit of a tail to onboarding, which I expect to impact throughput a little, but generally the Configure stage should start to stabilize in a month, I would say. Does that answer your question?
F
We're still onboarding a few new folks, and we also have a few additional vacancies. So if you look at slide 5, we are doing well; however, about half of the team are brand-new engineers, so I expect that ramp-up to continue before we see throughput stabilize. But it's been really good to talk about this data. With the Monitor stage, we've seen an impact from splitting the team, or we believe that was the impact, from last month; not July, sorry, June.
F
So that was good to watch, and we're focused on our initiatives to communicate better and align both back-end groups, because they used to work together as one team and now we've split into four. So, you know, we're fine-tuning our practices and making sure that we are still optimizing and working fast and efficiently, and all of that. It's nice to see that in July.
E
It seems like, if I do my rough math in my head, we've got about fifteen folks working across these groups, so our target should be roughly around 150 once the team is fully ramped. I'm excited about all the new players involved, and really excited about all the great hiring we've done, but just looking forward to continued velocity in that area. Thanks.
A
The next question was from Dan. Dan, I'll give you two seconds to come off mute; you mentioned that you might not be able to, before I voice it over for you. One, two... okay. Dan asked: we are pushing multi-cloud as one of our differentiators, but today we only make it easy to connect and create a cluster from a GKE cluster. What are our plans to do the same for AWS and other cloud providers that offer managed Kubernetes services? Daniel, do you want to take that one? I know the AWS one is underway, sure.
A
And I'll say, you know, Dan and I have been working closely with the Alliances team to see if we can engage those individual cloud providers to provide that integration; it's in their interest to have just as much ease of use for creating clusters on their infrastructure. Sam, would you like to voice your question? Sure.
A
And I think, you know, overall we have to remember that as a product we have this convention over configuration: we have the convention that we'll deploy a specific Kubernetes cluster in our conventional way, same with Prometheus, but we shouldn't always expect that the majority of users use that, right? The convention is meant to get people started, for those who want to use it out of the box, but we support the non-conventional methods. It's actually really interesting, the Prometheus data on how many are using them.
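As a concrete sketch of that non-conventional path, the following assumes the project-level Prometheus service endpoint with an api_url parameter; the host, token, and project ID are placeholders. It points a project at an existing, externally managed Prometheus server rather than the conventional GitLab-managed, in-cluster one.

```python
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"    # placeholder host
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder token
PROJECT_ID = 42                                     # placeholder project ID

# Point the project's Prometheus integration at an existing server
# instead of the conventional GitLab-managed deployment.
resp = requests.put(
    f"{GITLAB_API}/projects/{PROJECT_ID}/services/prometheus",
    headers=HEADERS,
    json={"api_url": "http://prometheus.example.com:9090"},  # your server
)
resp.raise_for_status()
```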
H
So this is sort of a strategy question. As a lot of people know, we're in the midst of switching over to Kubernetes on GitLab.com for a few services, and one of the services we're doing now is the registry. This is a service that we don't develop ourselves, and I went deep into Auto DevOps to kind of see whether it would be a good fit for using it for deployment. It doesn't currently look like that's the case, and I'm curious from a strategy perspective.
A
So the Release team, I know, could probably speak more to the deployment aspects, and we do consider Spinnaker a competitor, heavily, when it comes to Release. For Auto DevOps in general, though, I guess I would say we have a principle on the product direction page, I believe, that says the first-class workloads and use cases that we should target are cloud native, and those are necessarily going to be microservices, each probably with their own project. And so I know Daniel's emphasis on moving up where you can attach the Kubernetes cluster is in that vein, so that you can attach it at a group level and then deploy multiple different projects/repositories to that cluster.
A
But there's been some interesting analyst research recently about a move in the industry towards more platform teams, which was also part of a rethink of our use of the word PaaS: organizations are more frequently having centralized teams that manage significant compute resources, or, as in our current case, a massive Kubernetes cluster that the development teams can easily just deploy to, and that comes with a whole host of problems.
A
Daniel and I were discussing on an issue today about a service catalog: how do you, as an operator of that large instance, have a good understanding of all the different services that are running? Or even, as a contributor to one of those services, how do you get a good understanding of how your service is going to interact with other services, what APIs are available to you, what version they're on, and who the contact for that team is, as organizations get larger?
A
So I think we would definitely expect to have support for that use case as almost first class, because that fits our skate-where-the-puck-is-headed mentality, and I don't know how much of that involves Auto DevOps specifically adjusting in order to support it. Daniel, I know you've been thinking about this; please chime in with additional thoughts.
C
Yep. So it is fair that Auto DevOps doesn't cover every single use case under the sun, but we do want to get to a place where it covers the most common cloud-native development scenarios, and we are placing a high focus on dogfooding. If we come across an internal project like the one that you mentioned, John, and we identify that it's a common cloud-native use case, we would definitely want to go forward and add the features that would support that use case. That being said, if it's more of a legacy use case that we think may not be relevant long term, then we would probably evaluate. But yeah, I would love to talk more about that specific use case and get more details.
H
One thing you mentioned is attaching the cluster at the group level, and this would be something that we would really like, because of the way we structured our projects (and I think a lot of other people do this): we have one cluster and then a bunch of projects that deploy to it, and attaching each project individually to the same cluster doesn't necessarily make sense. So that would be good. Is there an issue for that that I can track, for having a cluster attached at the group level?
C
Yeah, so that is all functionality that is out there today. Today you can go to a group and you should see a Kubernetes option on the left-hand side, and you can add the cluster at the group level. It's also available at the instance level, so if you're an instance admin, when you go to settings you see that same menu on the left-hand side. Both of those are available today.
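For reference, the same group-level attachment can also be scripted; here is a minimal sketch assuming the group clusters API (POST /groups/:id/clusters/user) for the certificate-based integration, with placeholder host, token, group ID, and cluster credentials throughout.

```python
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"    # placeholder host
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder token
GROUP_ID = 7                                        # placeholder group ID

# Attach one existing cluster at the group level so every project in
# the group can deploy to it, instead of attaching project by project.
resp = requests.post(
    f"{GITLAB_API}/groups/{GROUP_ID}/clusters/user",
    headers=HEADERS,
    json={
        "name": "shared-cluster",
        "platform_kubernetes_attributes": {
            "api_url": "https://kube.example.com",  # cluster API endpoint
            "token": "<service-account-token>",     # placeholder
        },
    },
)
resp.raise_for_status()
```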
I
If I can chime in here: I think this is really important, that we use our own product. We have never made something in GitLab successful if we haven't used it ourselves, and a big reason why we don't see the usage yet for monitoring and Auto DevOps and instance attachment is that we're not using them ourselves; there are probably pretty big bugs that everyone's hitting. So the first way to get through that is to start using it ourselves.
I
Of course we want to support common cloud-native use cases, so there are two options: either our infrastructure team is trying to do something that is common and we should support it in the product, or our infrastructure team has to change to a common workflow. Either of them ends up with us using our own product, so as for the outcome of this investigation, it would seem very unlikely to me that the outcome is that we cannot use our own product ourselves.
I
This is GitLab, and we're running it in a cloud-native way, hence it makes sense that we support it. And then, you know, I'd like to see you in issues like this, like issue 7297, discussing with the infrastructure team and making sure that we can support this, because this is the first one, and it's a relatively straightforward one; it's not going to get any easier. If we miss this one, then there's an argument to miss the next one, and before you know it, we have GitLab.com running outside of GitLab.
I
We have to dogfood. We have to run our own things, because if we are not even prepared to use it, nobody will be. If we end up starting to use Spinnaker because our product can't do it, we might as well give up on our release strategy, because then every company in the world will do the same. So this is important, and this is urgent.
A
Yeah, Daniel, I pointed to you when I had a discussion with Devon, who's on that issue that John mentioned and who's going through the process of moving some non-core applications to Auto DevOps, so I think you've been involved in that discussion. He and Tory moved Design over, but we need to also focus on more core applications, especially for Monitor: the ones where we would actually wake people up in the middle of the night if they weren't available. We had a discussion yesterday with Devon about that; I'll also send you the recording.