From YouTube: 2020-12-07 162200 Multi Large Working Group
B
So you're also aware we are migrating to our self-hosted Elastic cluster, right?
A
Yes, okay, cool. I'm not sure of the time frame, though, but yeah, John has mentioned that that's in the plans. It sounds like there wasn't a whole lot of value-add from Elastic, so yeah.
D
Thanks, Marin. So yeah, I just wanted to give an update on our issue where we're tracking some of the open items for supporting the first GitLab-managed instance. All of that's being tracked in the issue; if you'd like to follow along, please do so there. Next up, we're going to come up with a proposal for how we will administer the instance and what the duties are as far as infrastructure, the customer, and customer support. So we'll be working on that this week.
C
I should... sorry, Jason, I wanted to ask Andrew a question or two. I read the issue and I didn't see where the discussion was about 99.95. Like, I saw a comment about us being equal to or better than GitLab.com, but I don't know how we came to the conclusion of 99.95.
D
Yeah, it was mostly a discussion with the account team when we met with them last week. They had asked the customer what their expectations were for uptime, and I think they had provided a number like 99.9, and so we felt, along with the account team, that we shouldn't treat this any differently than GitLab.com and that we should try to maintain the same SLA there. I can capture some of that discussion on the issue, but a lot of it happened in the sync with the account team.
D
Yeah. The way that I'm looking at this is that we should handle it the exact same way we handle .com contracts, where in our current contracts today we have an aspirational SLA. Now, we don't say anything about giving credits in case, you know, we miss that SLA, but we are trying to aspire to 99.95.
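As a point of reference for the 99.9 versus 99.95 discussion, the downtime budget implied by an availability target is simple to compute. A minimal sketch, assuming a 30-day month; the function name is illustrative:

```python
# Downtime budget implied by an availability target.
# Assumes a flat 30-day month; real SLA accounting windows may differ.

def downtime_budget_minutes(sla_percent: float, total_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of allowed downtime per period for a given availability target."""
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.9, 99.95):
    print(f"{sla}% -> {downtime_budget_minutes(sla):.1f} minutes of downtime per month")
```

At 99.9% that is roughly 43 minutes a month; tightening to 99.95% halves the budget to roughly 22 minutes.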
A
Yeah, and I'm just capturing that in the issue, but the goal is basically to take what we do for .com and, at the least, have the same policies.
C
Okay, I would like to have that clearly written down somewhere, because it's not really clear. And then another thing: the defined deployment release cadence says "the latest version"; what does that mean, right? Does that mean we upgrade them immediately after the 22nd to the latest .0 version? What does it mean for actually upgrading them on security patches, like patch releases? It says complete, but there are a lot of things that are not defined there.
D
Yeah, we were trying to get a sense of what the customer expects as far as having the instance upgraded on a regular basis, and they are fine with monthly updates. Now, as far as security releases go, I think that probably needs more discussion, but as long as they can have the instance updated monthly, they would be happy.
G
Maybe I'll interject here that I don't know if we want to have this working group taken over so much by a single topic. Potentially we should have a separate conversation, to make sure that we're addressing all the right questions before getting into this, and also to make sure that we're defining a service that can be, you know, sold to 20 different customers, not just the first one. I think that's probably some of what the concern is; it makes sense to take this to another...
F
I'd almost suggest that we have another working group; this is a pretty complex topic, especially when we want to make it repeatable and not an ad hoc thing for this first prospect.
C
I generally agree with you here, Steve. It's just that if you read the issue and read the tables, there are some statements in there that are really hard to connect with; when you read that something is complete, you can then find things like that. And it might end up that when the next customer comes in, we just use the same language and say we copy whatever we do for GitLab.com and customer one. We have to understand the decisions that are being made and why they're being made.
G
Oh yeah, no, certainly, I get the concern and everything. I'm just saying that there's probably a whole 30 or 45 minutes of discussion to have on this one topic, so maybe we should do that. Andrew, maybe you and I can work together to set that up.
D
Yeah, for sure. And I think, you know, we've done the work to get the customer expectation here; the piece that we're missing is what we're comfortable with on the infrastructure side. So that's where I need Brent and Marin to jump in and say what we are comfortable with in managing this first customer. Is 99.95 too high?
C
Cool. Jason, you're up next.
E
Right. So after much ado and testing and revision, we officially have the ability to deploy independently configured webservice fleets merged into master. We can now individually configure all the key items of the deployments, allowing for focused scaling and separation of concerns.
D
Yeah, this is just an FYI that we've got an opportunity canvas review set up with Scott and Anup later this month to talk about GitLab Private: the overall opportunity, not just this one customer, but how we want to scale it out in the future, what problems we're solving, and what the total opportunity size is. So, more of a product view on GitLab Private and how we're planning to offer it to customers. That's on the calendar for later this month.
E
Okay, in terms of updates on the OpenShift application operator itself: we'll soon be merging the code to consume the Helm chart within the application's code base, replacing the existing model generators.
C
Jason, I have a question for you related to the operator and how you're approaching this. I only read the summaries; I didn't follow everything. What kind of operational stress are you putting on it while you're developing? Are you using any of the reference architecture systems to see how the operator acts under those workloads, or is this just initial work to actually get the operator working?
E
We do have separate open issues for CI and scripted generation of OpenShift clusters for development and for integration with CI and other items, but they're not a part of this particular work. The intent of the work here is specifically about not hard-coding all of the Kubernetes objects in Go data structures and instead reusing the work from the Helm chart, as we get people comfortable with what the operator is and how to work with it.
E
Okay. In terms of a production blocker issue, we have the zero-downtime deployment for NGINX ingress; it's being worked on from multiple angles, and part of it is already addressed in the short term. We have two things from John Jarvis from Delivery: first, we found a methodology to put in place a preStop hook that prevents early, or I should say abrupt, termination of the NGINX process itself; and second, we've added terminationGracePeriodSeconds. The combination of those two things gives us the ability to cleanly roll the controller going forward.
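The preStop-hook-plus-grace-period combination described here is a common Kubernetes pattern for draining connections before shutdown. A minimal sketch, assuming the upstream ingress-nginx controller image (which ships a `/wait-shutdown` helper); the names and values are illustrative, not the exact manifest discussed in the meeting:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller   # illustrative name
spec:
  template:
    spec:
      # Give in-flight connections time to drain before the pod is killed.
      terminationGracePeriodSeconds: 300
      containers:
        - name: controller
          lifecycle:
            preStop:
              exec:
                # Ask NGINX to finish serving open connections instead of
                # terminating abruptly on SIGTERM.
                command: ["/wait-shutdown"]
```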
E
We are also in the process of upgrading NGINX itself. We are looking to jump forward to the last released Helm 2 variant of that chart from upstream and, if necessary, pull things from slightly later revisions and make the necessary backports to Helm v2. Point of context on Helm 2 versus Helm 3: Helm 2 is officially in maintenance mode.

However, we have to wait a little while to make sure that our customers are on recent enough versions that they're not going to hit the problem of having one chart that we depend on being Helm 3 while the rest of our chart is Helm 2. Production is currently on Helm 2.16 and working to move to Helm 3, and I know there are a lot of customers out there still running Helm 2.14 and 2.16 through Rancher, so we have to be careful about that.
E
I will have to revisit. I know that it's not as hard of a breaking change, because we have a larger refinement, but I have not actually tried the Helm 2-to-3 plugin recently myself. The biggest item that I remember we narrowed it down to: there were a couple of components that did not like to upgrade because Helm and Tiller disagreed on which label they were using, which resulted in immutable labels.
F
Oh yeah, just a quick follow-up from last week's discussion of the Environment Toolkit: we're aiming to make it public by the end of next week. We have some work to do to remove some baked-in config data and secrets, and we have to figure out a way to make that work with more flexible config.
F
So we're on the way; it's just a matter of slotting it in and actually getting it done. By next week the project should be public, with a warning to say there are still a few things to do, but feel free to start perusing and preparing.
A
Cool, thanks, Grant, it's exciting. I'll go ahead and move questions on to discussion. Just one quick heads-up that we expect GitHub to probably announce their version of private SaaS this week during GitHub Universe, which starts tomorrow; the keynote is around this time tomorrow, I think. Actually, the roadmap issue for their version is there. It used to be called GitHub Private; now it's been renamed to AE. I've tried to find out what AE stands for and have been unsuccessful in doing so. Azure Enterprise? Don't know.
A
Andrew T has opened an issue to collaborate with Marketing, just so we can have an answer ready if customers, prospects, or sales ask, you name it, so we're getting that rolling there. And just noting that there are separate roadmap items for Packages, Actions, and other features of GitHub, so it may or may not be fully featured upon release. You can also see some documentation arriving, and as part of that documentation you can see their pricing policy: it will be $39 per month, an $18 premium over their GitHub Enterprise price.
A
Yeah, you know, I think it is basically GitHub Private. I don't know if it'll be supported on multiple cloud providers, or how you get started right now. When you look at some of the documentation, it's all based on GitHub Support: you have to call GitHub Support, send them some information like your SMTP server and things like that, and they will get you going. So it is very much a private managed service, for sure, but I don't know if it is a GitHub-specific thing or an Azure push-button-to-deploy type of thing, like you would do with an Azure database service, where you just hit the button and go. That I do not know; we might find out tomorrow. I tried to find out, but it is not in the docs that are public; you cannot find out that information from them. It has been in beta for some time, as you can tell from the documentation and things like that, so it's possible that the current iteration was not hosted through Azure and the new version will be, but we'll find out more tomorrow, I imagine. But yeah, I think it's a direct competitor to this.
C
Did you maybe stumble upon information about whether this is an HA offering, or is this like a singular-instance setup? Because I couldn't find that.
A
Yeah, I don't know. There is a 500-seat minimum based on the pricing policy; you can see it linked in the MD. So it's probably not too small at the small end, at least, but that also may have been the bar for their limited release. I don't know; it might get relaxed as it goes GA.
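Taking the figures from the meeting at face value ($39 per user per month, described as an $18 premium over GitHub Enterprise, and the 500-seat minimum), the implied base price and minimum monthly contract size work out as follows; the variable names are illustrative:

```python
# Back-of-the-envelope check on the pricing figures mentioned above.
# The GitHub Enterprise base price is derived, not stated in the meeting.

PREMIUM = 18                           # $/user/month premium over GitHub Enterprise
AE_PER_USER = 39                       # $/user/month quoted for GitHub AE
GHE_PER_USER = AE_PER_USER - PREMIUM   # implied base price: $21/user/month
MIN_SEATS = 500                        # reported seat minimum

min_monthly_contract = AE_PER_USER * MIN_SEATS
print(GHE_PER_USER)           # 21
print(min_monthly_contract)   # 19500
```

So the reported floor works out to roughly $19,500 per month, which is consistent with the "not too small at the small end" reading above.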
E
I'd say 500 seats makes a lot of sense. Even for our own small instances, that's effectively the single-Omnibus size, or, you know, two to three nodes of reasonably sized Kubernetes. So at that low-end minimum of the market, they're just making sure that they're covering costs for whatever their reasonably sized Azure VMs are.
C
I mean, ops.gitlab.net has 500 users, obviously not all active at the same time, but the point is that it covers a completely different space from the start, at least. Okay, cool.