From YouTube: 2020 07 06 Multi Large Working Group
A
All right, thank you, Mary. I'll start. I made some updates to the MR to essentially clamp down on the definitions and the scope for what we discussed last in this working group, and to try to add some milestones and specifics as to what I think the working group should be solving. It's definitely in line with what Craig had added and what we discussed last time. So I don't know if we want to sort of... I realize some of the stuff that was discussed in the comments.
B
So basically, I think you answered the first one, which was the question of the exclusion of cost for instances. I think so, and obviously, hopefully we can at least keep an eye towards operationalizing many of these things for our customers. I think it makes sense for the first iteration. But one question on my mind, and probably what a management representative here would be asking, is what we're doing.
B
Where do we think the biggest focus of this group will be? Do you think it's more about the operational processes of setting up, say, an operations center for multiple instances and responding efficiently to any incidents or alerts that arise? Or is it more about building into the product more automation and capability to handle some of this?
B
The
comment
day
to
operations
like
dealing
with
giddily
dis
base,
rebalancing
guilty,
shards
and
all
the
rest
and
the
sort
of
stuff
that
potentially
we
do
I'm,
not
sure
what
the
breakdown
of
work
is
to
automate
more.
This
needs
away
and
certainly
kind
of
reduce
the
number
of
all
these
things.
Everybody's,
probably
both
play
with
the
focus
but
I.
A
I think it's a little bit of both. I was in the middle of answering that comment. Some of those operations that you mentioned, like rebalancing, really happen inside the instances, right? And there is work happening on that front for Gitaly, which would essentially get replicated into those instances. So I think the output of this working group is a consumer of those facilities.
A
Saying: I, over here, see all these different instances, and I see that instance over there. But I think the instances themselves have to take care of themselves, right? So this is more like our ability to say we need to grow these instances efficiently. We have the reference architectures, so we know what they look like at 1K, 2K, 5K.
A
If
they
do,
what
do
they
do
and
that's
I
think
the
problem
we're
trying
to
solve,
because
we
are
gonna
run
into
that
as
soon
as
we
set
up
new
instances
and
say
China
and
amia,
so
there's
what
I
call
the
step
functions:
okay,
we're
gonna,
read:
2,000
users,
so
I
you
know
850.
We
know
we
need
to
do
step
one
and
two
so
that
this
instance
can
now
support
2,000
and
at
you
know,
1500.
We
may
decide
okay,
it's
time
to
get
ready
for
5,000.
A
So
how
do
we
do
that?
Seamlessly?
I!
Think
that's
one
of
the
aspects
that
this
working
group
is
solving
and
then
those
are
specific
things
like
from
an
overall
point
of
view.
I,
don't
really
care
that
some
node
in
China
is
rebalance
in
itself.
That's
a
I
think
that's
a
concern
for
they
for
the
instance
in
China,
more
than
the
sort
of
a
world
Orchestrator
and
we're
very
used
somewhere,
and
we
I
think
we
have
a
lot
of
stuff
that
we've
built.
That
assumes
give
up
Commons
it.
A
But
what
happens
when
you
take
these
tools
and
then
you
start
running
them
again
against
a
bunch
of
instances
and
what
new
things
that
we
need
to
develop
so
that
we
can
actually
support
this
instances,
because
if
you
have
one
team
that
is
now
getting
a
flood
of
alerts,
for
instance,
from
various
instances,
do
we
need
to
now
do
routing,
because
we
half
I,
don't
know
I
I,
think
it's
a
higher
level
and
scalability
is
obviously
working
on.
You
know
the
work
they're
doing
to
scale
get
calm.
It's
gonna
apply
to
the
work.
B
I guess I'm just coming at this three months out, and this is all imprecise, but you know, I think there's potentially an idea that we can use cloud native and an operator to perhaps automate many of these things, potentially up to and including some of the scaling between architecture types. Now, obviously, if you're going from a single node to a distributed setup, that may or may not be trivial to do.
B
Is
that
sort
of
like
a
focus
or
is
it
more
about
like
the
interest
rate
of
processes
and
basically
I'm
trying
to
figure
out
you
know,
should
we
have
larissa,
you
know,
and
distribution
and
and
focusing
operate
in
a
helmet
arts
or
maybe
Andrew,
as
the
in
finish
of
p.m.
more
focused
around
sort
of
the
larger
processes
of
the
interest
or
group,
be
a
mixture
of
both
I
when
I
figure
out
like?
Is
it
weighted
one
direction
or
the
other.
D
Would that not be a function of timelines? If we had really short timelines, then it might be sort of: let's get it in without the operator. But if the timelines were longer... I'm not suggesting one or the other, but it's kind of a function of that. So we should discuss that first. Yeah, right, I think these are some.
A
These instances specifically that we're putting up as examples, they're gonna drive us straight into regional compliance, right? EMEA, China. I mean, specifically with GDPR and the like. So that's gonna start dipping a toe into the waters of which individuals can actually operate which instances, either to protect ourselves or to comply with laws. And that's why I said that federation was kind of a mixed bag, because it would...
E
Okay, fully isolated means they don't have a common virtual operations center, like different people. That's not the focus of this. I agree with the focus on instances owned and operated by GitLab. I think that's... well, I mean, I added that the automation we make should be applicable to self-managed installations in the future. This is all about automation, right? So there's nothing super controversial. Take your time to review them. Okay.
E
I'm
trying
to
write
them
so
that,
like
the
biggest
the
biggest
focus
of
this
working
group,
I
think
should
be
like
making
sure
that
we
can.
We
can
run,
get
Lancome
on
our
ham
charts.
It's
that's.
That's
like
the
first
unlock
to
automating,
more
so
whatever
the
Charter
for.
If
this
group
is
doesn't
point
to
like
eliminating,
NFS
and
and
and
getting
get
Lancome
on
the
hound
charts,
then
we're
doing
it
wrong.
So
I'm
not
sure
exactly.
C
So Helm, Helm is just a Chef for Kubernetes. It's just a way to control the configuration of an installation. And for something as complicated as GitLab or Prometheus, where Prometheus is not just one component but a bunch of different components, you can build a series of Helm charts that control that configuration.
C
That's
just
not
like
trivial.
You've
got
a
bunch
of
rails
processes
and
you
don't
care
how
they
come
and
go.
You
have
like
some
amount
of
state
that
needs
to
be
moved
around
and
sidekick
jobs
that
need
to
be
stopped,
sidekick,
sidekick,
job
workers
that
need
to
be
started
and
stopped,
and
databases
and
Redis,
and
all
that
so
my
proposal
a
long
time
ago,
was
that
helm
was
a
good
start
to
get
get
lab
into
a
kubernetes
cluster.
G
Let me bring some light into that, because, as Ben said, we're building such an operator. Basically the key concept, in my opinion, for the operator is what's called the reconciliation cycle. So basically, with Helm, you create automation, and this automation deploys the infrastructure. But later on this infrastructure may change, either because you want to explicitly change it...
G
I
want
to
add
an
extra
note
or
because
I
know
dice,
and
so
this
reconciliation
cycle,
which
transcends
the
operator,
is
a
software
that
continuously
watches
the
expected
state
against
the
actual
state
and
if
it
finds
any
difference,
it
applies
whatever
change
we're
and
you
know
that
has
died
or
deploy
every
instance
that
you
have
request,
and
this
cannot
be
done
on
how
so
how
so
'ls
cannot.
The
first
part
you'll,
deploy
this
automation
and
then
you're
done,
but
if
anything
happens
or
you
want
to
change
that,
you
may
run
into
difficulties.
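The reconciliation cycle described here can be sketched as a diff between desired and actual state. A minimal illustration, not the actual operator code; `reconcile` and the state dicts are hypothetical stand-ins for what a real operator reads and writes through the Kubernetes API:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the resources that must change to move `actual` toward `desired`.

    Keys present in `desired` but missing or different in `actual` are the
    resources the operator must (re)create or update on this cycle.
    """
    return {
        name: spec
        for name, spec in desired.items()
        if actual.get(name) != spec
    }

# One pass of the loop: a node died, so one replica disappeared.
desired = {"pg-0": "running", "pg-1": "running", "pg-2": "running"}
actual = {"pg-0": "running", "pg-1": "running"}  # pg-2 vanished

changes = reconcile(desired, actual)
# → {'pg-2': 'running'}: the operator would redeploy the lost instance
```

A real operator runs this comparison continuously, which is exactly what a one-shot `helm install` cannot do.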
G
So an operator, via this reconciliation cycle, allows you not only to keep the system in the desired state, but also to apply these day-two operations that you're referring to as part of this working group. For example, we're targeting right now how to perform a controlled restart of a Postgres master by first draining the traffic. All this can be fully automated. Or how to perform a backup. It is something that...
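A day-two operation like the controlled restart mentioned here is essentially an ordered sequence of steps the operator drives. The following is an illustrative sketch only; `drain_traffic`, `restart_primary`, and `restore_traffic` are hypothetical names, not the real operator's API:

```python
# Sketch of a day-two operation: controlled restart of a database
# primary, draining traffic first. State is modeled as a plain dict.

def drain_traffic(state: dict) -> dict:
    """Stop routing new connections to the primary."""
    return {**state, "traffic": "drained"}

def restart_primary(state: dict) -> dict:
    """Restart the primary once it is no longer serving traffic."""
    assert state["traffic"] == "drained", "never restart under load"
    return {**state, "primary": "restarted"}

def restore_traffic(state: dict) -> dict:
    """Route connections back to the restarted primary."""
    return {**state, "traffic": "serving"}

# The operator runs the steps in order; each step only proceeds
# from the state the previous step produced.
state = {"traffic": "serving", "primary": "running"}
for step in (drain_traffic, restart_primary, restore_traffic):
    state = step(state)

# state == {"traffic": "serving", "primary": "restarted"}
```

The point of encoding the sequence in the operator is that the ordering constraint (drain before restart) is enforced by software rather than a runbook.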
C
Or the other way around. For example, in our production environment we actually use the Prometheus Operator Helm chart to configure the operator, and the operator does the actual work. So it's Helm, then operator, then Kubernetes. You don't need to do that; you could actually control the operator directly. But Helm is just one config language for Kubernetes, and it's not the only one; there are several other competing projects.

G
So the operator may actually need some configuration itself; it may have variables, like options. Speaking to our case, you might want to deploy in a given namespace, or you may or may not want to deploy the UI, and there are several other knobs that you may tune. So you can use Helm for those. Helm is actually used to deploy the operator, and then the operator takes control. Actually, the whole pattern has also been mentioned: the CRD, the custom resource definition, which is basically a high-level object that you create that represents an abstract concept.
G
In
our
case,
a
phosphorus
blaster
right
and
you
don't
need
to
understand
that
it
uses
Patroni
underneath
that
it
uses
connection
pooling
or
how
they
are
configured
together.
Using
games
to
make
sockets.
You
say:
hey
I
want
a
phosphorus
cluster
with
three
instances
with
this
phosphorus
version
and
maybe
discuss
some
configuration
and
that's
a
seer,
which
is
a
jumbo
file.
Basically
falsification
relevant
they
just
committed
to
kubernetes
Benitez
will
talk
to
the
operator.
The
operator
will
create
the
actual
stuff
behind
the
scenes.
So
how
charts
are
the
bootstrapping
process?
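A custom resource of the kind described here might look roughly like the following. This is a sketch built as a Python dict and dumped as JSON (which Kubernetes accepts alongside YAML); the API group, kind, and field names are illustrative, not the actual schema of any real operator's CRD:

```python
import json

# Hypothetical custom resource describing a Postgres cluster. The user
# states *what* they want; the operator works out Patroni, pooling, etc.
postgres_cluster = {
    "apiVersion": "example.gitlab.com/v1",   # hypothetical API group
    "kind": "PostgresCluster",               # hypothetical kind
    "metadata": {"name": "main-db", "namespace": "gitlab"},
    "spec": {
        "instances": 3,            # three-node cluster
        "postgresVersion": "11",   # desired Postgres major version
        "parameters": {            # extra Postgres configuration
            "max_connections": "200",
        },
    },
}

# A JSON dump of this dict is already a valid manifest body to
# commit to the cluster, where the operator will pick it up.
manifest = json.dumps(postgres_cluster, indent=2)
print(manifest)
```

The abstraction boundary is the `spec`: nothing in it mentions Patroni or connection pooling, which stay an implementation detail of the operator.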
F
The operator concept is hard to implement on an existing installation. It's not trivial on a specific setup like the GitLab.com charts that we are building for GitLab.com; that takes a lot more effort. For greenfield installations it would make a lot of sense to approach it right now, but if we want to go down this route for GitLab.com, that adds an additional level of complexity that we need to think about, especially given the fact that we are shipping Helm charts to our customers, meaning we need to support both ways.
B
I
can
make
fried
a
little
bit
of
color
on
where
we're
at
I'm
the
operator.
So
we
worked
on
a
while
ago
and
we
ran
into
problems
with
the
helm
with
helm
and
the
operator
stepping
on
each
other,
because
helm
doesn't
like
it
when
you
change
things
underneath
it
I
linked
in
the
point
five.
A
couple:
there's
additional
items
as
well:
I
just
found
one
of
the
first
operator
discussion,
there's
other
parts
of
that
as
well.
I
just
couldn't
find
them
quickly
and
then
B.
B
So we are doing some thinking on this right now. We're having, I believe, weekly meetings with Red Hat, so there's some discussion here and some progress. That's the current state of the operator. And potentially the Helm SDK of the Operator SDK might be an interesting way to have the operator act as a control plane but execute some operations through the Helm chart. I think hopefully over time we can find a way to have the operator consume more and more of the Helm charting, getting away from Helm. But that's a little bit further out.
F
When we were playing with the operator, we were trying to implement the operator concept inside of the charts to begin with. We ran into a huge number of edge cases, and that created quite a lot of challenges for us. We either could focus on getting the Helm charts to be usable, or work on the operator while trying to get the Helm charts to be usable at the same time, which didn't really work.
A
Gonna
do
a
time
check.
We
have
two
more
minutes
to
go
so
I.
Think
the
next
thing
for
us
to
do
is
to
put
some
of
these
ideas
in
writing
to
be
specific
and
make
sure
that
we're
figuring
out
which
things
for
our
string
I'll
take
care
of
doing
that.
We
also
need
to
figure
out
the
so
I
wanted
to
raise
the
point
of
sit.
Are
you
gonna
be
attending
this
one
regularly,
because
we
have.