From YouTube: 2022-01-31 Kubernetes Migration Working Group
A
Okay, let's get started. Welcome, everyone. Today is January 31st, 2022, and this is the Kubernetes Migration Working Group weekly. Let's get to the agenda items. First, what's been done. Amy, do you want to verbalize what has been done?
B
Yeah, thank you. So, just to mention that we've officially wrapped up the Pages migration. We had just one outstanding bug on that, which has now been resolved, so the epic has been closed. Thanks to everyone who helped out there. Alongside that, the rate-limiting Redis cluster migration work is moving along. So far we've validated that we can run as a hybrid deployment, which has been the big thing, so we are now working to pursue that approach.
A
Okay, then let's move on to what's happening next.
B
Great. So we'll be continuing to work on the rate-limiting Redis migration. The big part we're doing at the moment is that we need to upgrade our Redis instances. What we're finding is that as we spin things up with Kubernetes, they come up on the latest version, while our VMs are not currently running the latest version, which means we're running a mixture. So we're doing the Redis upgrade at the moment, and then we'll continue to move ahead to deploy onto pre in the next week or so.
A
That's cool. Thank you for the update. Let's move on to the blockers. I see John's item is coming up. Is John here?
A
He's just saying yeah. Okay, I'll relay it then. So basically, he had trouble configuring the environment with Praefect running in Kubernetes while the Gitaly nodes are outside of Kubernetes. It could be a documentation gap, or there could be some product gaps here; it's not clear right now. But I had two questions here. The first one was: do we intend to run this hybrid mode in perpetuity, with Praefect effectively in Kubernetes but Gitaly outside of Kubernetes?
B
So I think, if it's a possible approach, we would certainly consider it. I believe from the testing that getting Gitaly into Kubernetes could be tricky, and Praefect is maybe an easier first step. So if we have options to migrate something, we can certainly look at it, but I don't think we would necessarily choose this to be the long-term setup purely based on wanting it to be hybrid. Okay, so hybrid is the first step towards fully...
A
Migrating the Gitaly hosting to Kubernetes. Okay, gotcha. So, based on the iterative approach, we do need to sort out where the gap is, whether it's a documentation gap or a product gap here. Does that sound right?
C
I'd agree. Gitaly itself has its own issues, which are being worked on in a separate issue, in terms of running well in Kubernetes. Praefect, in contrast: its application nodes are effectively stateless, so that's something that is more easily translated into the Kubernetes environment.
A
Okay, so what do we need to do to identify those gaps, whether documentation or product? What do we need to do here? I guess we need to discover those gaps through testing, right? Or...
D
Well, I think there are a couple of things to mention here. I'm sorry, my video is not working, so I can't get my camera online. But there are two different things we need to make sure we're understanding correctly. There's Praefect and Gitaly Cluster, which is currently not functional in Kubernetes, per what Jason has said. And then there's Gitaly, which is functional in Kubernetes. So the Gitaly nodes necessary for a cluster deployment are what we can't currently reference through Kubernetes.
B
Is this what we want? Do we want to do this? I know this testing is around the sort of end goal of getting Gitaly and Praefect both into Kubernetes. But I'm wondering: is this actually a goal we want to achieve? Because it sounds like either way we're going to have to prioritize work around it or find some workarounds. Is this actually something we want to be doing?
A
And also, to your point, the Praefect chart is still in a pre-GA status, so if we put it in production, we probably need to prioritize making it GA.
E
Sure, yeah. I wasn't sure if I was going to hop in and say anything, but we have been talking about what it would take to recommend the chart on the same level as we recommend Omnibus. I know that this meeting is more about us internally, but that would help the maturity of our chart, and essentially, in the future, we might even recommend the chart over Omnibus. So this all leads to that, I would say.
C
Right now I'm reviewing some of his comments, but he is accurate in saying that he's had problems getting it to actually work because, as he notes in his comments, it's not designed to do what he's trying to do. So we need engineering work on this chart in order to do what he's attempting to test, which is Praefect in cluster and Gitaly out of cluster; the chart is very much not designed to do that right now.
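For context, the split the chart does document today is chart-managed Gitaly versus fully external Gitaly nodes; a hypothetical values sketch of the external hook, with placeholder hostnames (key names follow the public GitLab Helm chart, but treat the values as illustrative):

```shell
# Sketch of the chart's documented *external* Gitaly hook. Hostname and port
# are placeholders. Per the discussion above, there is no analogous knob that
# pairs an in-cluster Praefect with out-of-cluster Gitaly nodes, which is the
# combination the test needed.
cat > external-gitaly-values.yaml <<'EOF'
global:
  gitaly:
    enabled: false                 # do not run the chart's Gitaly StatefulSet
    external:
      - name: default
        hostname: gitaly-1.example.internal
        port: 8075
EOF
```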
A
So do we want to pause on the testing? My understanding was that Praefect works in Kubernetes with external Gitaly linked in, so we wanted to test that scenario to prepare the migration.
D
My gut reaction on this is that it's more important to ensure that Gitaly, not Praefect, works within the chart first, as a building block and a foundational block. If we get that working and we're happy with the testing results from that, then we can decide between Dylan, myself and the internal team what makes the most sense for us to tackle as far as getting Praefect and its corresponding nodes within a Kubernetes cluster. Right now, I will say that among the customers I've talked to, we have not run into huge demand for that.
C
It's not a lot. Every once in a while, one or two parties come in and make a whole bunch of noise, for good reason. But, as Mark said, customer-wise there's little beyond a bunch of people asking "Why isn't this the default?" and us having to answer "It can't work that way yet."
D
I actually receive much more demand for Gitaly, not Gitaly Cluster, internal to Kubernetes. They want a Kubernetes deployment to replace Omnibus, with everything for a standalone, non-clustered install in Kubernetes, and in our reference architecture we still recommend against that. That's a whole different topic, but to me that is more important to test and achieve maturity on as a first step, because I think it will satisfy a lot of the customers who are requesting Kubernetes.
A
So, Amy, what is our migration plan? If we run Gitaly in Kubernetes and leave Praefect out, is that a settled scenario, or a first step towards a full migration? If that's a scenario we can accept, then maybe we switch our testing direction towards testing Gitaly.
B
Absolutely. I think we've landed on whatever is going to be the most straightforward. If we're saying that Gitaly is going to be more straightforward, and actually a better thing to have in Kubernetes, then absolutely. I believe from conversations last year that it had more unknowns, more complexity, but yeah: if we're happy that we can test and show that Gitaly will work with Praefect out, then we can work with that.
D
From a purely numbers perspective, looking at our customers, the vast majority of them do not use Praefect or cluster. To be fair, the ones that do are very high-value, very important customers, but from a pure numbers perspective, the vast majority, by a long shot, do not use cluster, and they're the ones that are going to benefit the most from the Helm charts being updated to run Gitaly in native Kubernetes as a deployment option.
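The direction discussed here, Gitaly inside Kubernetes with no Praefect or Gitaly Cluster, can be written down as a minimal values sketch; key names follow the public GitLab Helm chart, and the values are illustrative rather than our real configuration:

```shell
# Hypothetical sketch of the agreed first step: chart-managed Gitaly running
# in Kubernetes, with Praefect/Gitaly Cluster left out of this iteration.
# An illustration only, not a production configuration.
cat > gitaly-in-cluster-values.yaml <<'EOF'
global:
  gitaly:
    enabled: true     # run Gitaly as the chart's in-cluster StatefulSet
  praefect:
    enabled: false    # no Gitaly Cluster for this iteration
EOF
```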
A
That sounds like a path forward. So, Lindsay, can we redirect the testing path, our direction, to testing Gitaly in Kubernetes with Praefect outside?
E
Yeah. And then we can still talk about what our plans are for this quarter, Chun, but basically we have some efficiency projects that we're going to work on, and we'll move cloud-native maturity more into Q2. So I was going to get with Mark to check about Q2, more on that timeline. Yeah.
A
That sounds good too, because now that we've switched our direction, the Praefect chart can probably wait a little bit more.
E
Yeah, well, also, I meant that anywhere we can, we'll do a little bit of work, but any work related to getting Gitaly Cluster into Helm we'll probably do in Q2. Yeah, okay.
A
Okay, thank you. So let's just continue our testing towards Kubernetes, see how that works, then circle back to see how ready we are and adjust our plan. Cool. Okay, thank you, everyone. This was a good discussion.
A
Yeah, then let's move on to the discussion section. Checking the time, we only have eight minutes, so bring up the questions you want to verbalize.
B
To talk to mine: point five, I believe, is covered by the discussion we've just had, which is all great. But for point six, I wanted to mostly just share: this has come from Staging Ref, and I've opened up an issue so we can discuss keeping infra changes reflected in the reference architecture, whether we want to do that, and how we want to do that.
F
Thanks, Amy, for creating that issue. At an initial glance, I would say yes, because we do want to have Staging Ref as close to production as possible, so that it can be considered a good testing environment, as you pointed out. Otherwise, if there are changes, it will invalidate the testing. At this point I do not know how often we make these kinds of changes, so let me circle back to you on that one, and then we should probably discuss async so that we have others chime in as well on what the path should be.
F
I would assume at this point that when the infra team is going from pre to staging, that would be a good time, and we'd do it in sync with staging as well. So we could go in sync with it and, before it goes to production, capture the changes that have to go in from an infrastructure point of view first.
F
And that's what I would want to understand better: how often we do it in Staging Ref. For now, a lot of people are not using Staging Ref enough, so it's not much of a concern, but long term it definitely will be. So let's come up with a plan that would work best, one where it is just done one time and not as a deviation of Staging Ref from the reference architecture. Ideally it should match the reference architecture, but I do not know how often we do it yet.
B
Yep, okay, great, that makes sense; thanks for that. I think this fits in with the kind of conversation we've had here earlier around NGINX and the proxy, and where we sit in terms of recommendations to users versus what we do on GitLab.com.
B
I
we
have
no
process
for
that
at
the
moment,
but
probably
yeah.
That's
worrying
what
we
also
want
to
tie
to
this,
like
I
imagine
that
users
may
be
interested
in
you
know.
Why
did
we
change
what
you
know?
What
what
experience
do
we
have,
but
I
don't
believe
we
have
any
way
of
linking
that
together.
Yet,
okay.
G
Yeah, I'm supportive of using our stagings to improve the reference architectures. If there are changes, and it's something that will eventually be a permanent change in the reference architecture, I think those are really reasonable.
G
I
think
within
the
confinements
of
reference
architecture,
if,
if
the
staging
of
those
referent
architectures
is
our
testing
grounds
for
future
enhancements,
I
think
that's
that's
a
a
great
storyline
to
to
tell,
but
if
it's
an
exception
that
we
only
do
it
for
us,
then
I
think
we
should
dive
into
to
further
understand
the
changes.
A
Okay, that's the end of the agenda today. Anything else?