From YouTube: 2021-05-17 Delivery team weekly EMEA/AMER
B
Hello, so this is all of us. So welcome, and happy Monday. I haven't put MTTP in here.
B
It has started tracking in Sisense again, but it looks pretty horrendous, and I suspect that's because we don't have all 17 days of May data; it's probably just the first couple of days. So I will review where we are with that one. For now, on the announcements: there's certainly nothing too much to shout about here. Just be aware of B; if you've been at GitLab for a while, that may affect you. So please just check your DocuSign accounts. Cool, and then on to discussions.
C
So we have some environment variables that need to be configured inside of Kubernetes, and currently we don't support the ability to put secret information inside of environment variables in Kubernetes. This is a limitation of our Helm chart, and it is intentional: we have it in our documentation to prevent us from setting secrets as environment variables in our Kubernetes configuration.
C
So I see us having two options to move forward. One: we modify the policy of our Helm chart to allow such configuration. Two: we pick up that issue to add the necessary configuration, so we could create a Secret object and pull that in as an environment variable. Both of those will take a little bit of time, and feedback from the Distribution team, in order to get accomplished, and then we'd upgrade our chart accordingly to get that environment put in place.
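The second option (a Secret object pulled in as an environment variable) could look roughly like the following sketch. The names (`app-secrets`, `API_TOKEN`) are hypothetical, and the real chart change would go through the Distribution team; this just shows the `secretKeyRef` shape Kubernetes expects.

```python
import base64

def build_secret_manifest(name: str, data: dict) -> dict:
    """Build a Kubernetes Secret manifest; values are base64-encoded,
    as the Kubernetes API expects for the `data` field."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {
            k: base64.b64encode(v.encode()).decode() for k, v in data.items()
        },
    }

def env_from_secret(var_name: str, secret_name: str, key: str) -> dict:
    """Container env entry that pulls its value from the Secret at
    runtime, so the plaintext never sits in the chart values."""
    return {
        "name": var_name,
        "valueFrom": {"secretKeyRef": {"name": secret_name, "key": key}},
    }

# Hypothetical names, for illustration only.
secret = build_secret_manifest("app-secrets", {"API_TOKEN": "s3cr3t"})
env_entry = env_from_secret("API_TOKEN", "app-secrets", "API_TOKEN")
```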
B
I agree; I think that's not sensible. We have some... well, I suppose it's a question of how much. Okay, so there are different threads here, right. So if we can patch this and do it in parallel, and then come back to the Helm chart change, assuming Distribution haven't got to it before you get there, which I'll assume they won't, right, because it's only been a few years.
B
Yeah, that sounds like the best approach. The only question mark I put on that one is the bug that Henry just pointed out, which Graham mentioned the last time we did this. Does anyone know whether we ever found the issue for what that bug is?
C
No, I looked at the issue and I couldn't find a reference to it; I just saw the verbiage of him asking about it, but I couldn't find it. So I think what I could do is test to make sure I can recreate the patch of what we would need inside of our own Helm chart. I could test this in Minikube, for example, and I could also test rolling backwards, just to make sure that we're not going to have a problem once we've removed the configuration, and see if we run into that bug. That should give us enough confidence to either move forward or revisit this plan of action if necessary.
B
Yeah, and there are three of you around who can contribute on this. So I think, as long as we line these things up, you know, if someone has time to pick up and start work on the Helm chart, then we can go there.
D
For this variable, I think there's only one customer right now, so it's just one entry there. And besides that, I'm wondering anyway how we make sure, if we need to add something more, that the SREs are putting this into both places, like for Chef and also for Kubernetes, right? So moving this into GKMS might be the shortest way to do that, maybe, because then you would...
A
Move it into GKMS, I think it makes sense. I mean, in either case we can just source it from either the raw file or GKMS, so that it's altered in one place.
D
Weird? What do you mean, weird config? Yeah, I think we had this once: we had this environment key directly in the role file, and then we later changed it to a different name as an omnibus attribute in the Chef role file, with the omnibus cookbook then converting this into an environment variable.
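A toy version of the conversion described above, with entirely hypothetical attribute names (the real role file and omnibus cookbook attributes may be named differently):

```python
# A Chef role file is JSON; suppose the value lives under an omnibus
# attribute instead of a literal environment key (names are made up).
role = {
    "name": "hypothetical-web-role",
    "default_attributes": {
        "omnibus-gitlab": {"custom_env_vars": {"SOME_TOKEN": "value"}}
    },
}

def env_vars_from_role(role: dict) -> dict:
    """What the cookbook conceptually does: lift the attribute out of
    the role file and expose it as process environment variables."""
    return dict(role["default_attributes"]["omnibus-gitlab"]["custom_env_vars"])

env = env_vars_from_role(role)
```

The point of the indirection is that the role file stays declarative, and only the cookbook decides how the attribute reaches the process environment.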
A
But we could just move this to GKMS, and then once it's there we just, you know, read it from GKMS like we do other things, and yep, we don't need any chart updates. For the customers token, I don't know; we can ask one of the developers for customers.gitlab.com how it's used. My suspicion would be that it's not needed for the API, but I'm willing to ask.
C
To send the subscription portal admin token? Yeah, okay, all right, I'll follow up in whatever Slack channel I can find; I'll find them and ask them. Yeah.
B
Let's ask in development, and if you don't get an answer, let me know and we'll chase it up. Do we need any Jarv slash Reliability help for moving stuff into GKMS?
B
Awesome, okay, great. And the other one, on A3 as well, is the avatar issue. It looks like there's potentially also a code problem in that, so that would be another good one. I think either we can hand it over to Graham later tonight, or, if we do think it's a code thing, we can, you know, find a way to get some help for this one.
C
Just prior to this meeting, I responded to Heinrich wondering if we should open up an issue in gitlab-org/gitlab, because if it's a code issue, that's not our purview.
B
Okay, that sounds good, great. And so, if we don't get to all of this stuff today, hand it over to Graham and let's see what he can push forwards.
B
If there are any other actions you either want to add on there or discuss now, then shout. It would be good to have a think about how we build on what we did with Q1, which had loads of great stuff around collaboration; we made loads of great progress. I think my takeaway was that we made good progress on too many projects, and the goal for Q2 is to work on fewer projects.
C
Yorick... and we lost Jarv. I don't know how to encourage Jarv to come back, and I don't know how to tell the database team that they don't need sharding. But you know, if we had kept Yorick, I would've been happy. That's all I have, thank you.
B
But it's a fair point. It definitely did highlight, and it does highlight, the overhead as well of having someone coming in just for a few weeks, you know, not having time to actually get settled and start adding value. It does show that, yeah, there is also an overhead to people starting. So I'm...
B
Okay, definitely great, yes, yes, cool. Okay, I'll leave the issue open for a few days. If you do want to add any suggestions, or respond to any comments other people have left on there, please do.
B
Oh
and
then
related
to
this
is
c,
so
what
I'll
start
doing
is
on
mondays
sharing
priorities
in
slack.
The
main
reason
for
this
is
just
because
we
have
got
loads
going
on.
We
get
pinged
on
lots
of
stuff.
There
are
lots
of
different
things
flying
around
so
just
to
try
and
help
know
what
we
can
push
back
I'll
do
this
weekly,
because
I
think
our
stuff
changes
quite
rapidly.
B
So
give
me
a
shout
on
things
that
you're
aware
of
that.
We
do
need
to
get
prioritized
and
we'll
work
out
like
how
do
we
prioritize
these
for
next
week?
So,
for
example,
there's
definitely
some
more
registry
work
coming
up
that
I
think
we
can
push
out
to
next
week,
but
we
do
want
to
get
to
it
before
too
long.
A
couple
of
things
I
did
just
want
to
highlight
on
the
priorities
for
this
week
number
three:
we
have
a
registry
task.
B
I
actually
love
all
of
you
to
do
this,
which
is
that
the
registry
team
have
put
together
an
alternative
proposal
for
the
registry
migration.
So
this
changes
the
way
that
we
would
be
deploying.
B
It
changes
our
setup,
which
is
great,
has
some
kind
of
infrastructure
dependent
like
impact,
but
also
they
talk
quite
a
lot
about
how
they're
actually
going
to
do
things
in
the
code.
So
robert,
I'm
pretty
sure,
you'll
actually
have
some
thoughts
on
how
they're
going
to
approach
this
they've
been
trying
to
get
time
with
stan.
They
haven't
had
a
great
like
he's
a
little
bit
limited.
They
don't
think
they've
had
as
much
input
as
they
would
like,
but
certainly
from
a
delivery
perspective.
B
Knowing
that
we'll
be
deploying
to
this
thing,
it
has
a
link
to
to
omnibus
and
just
anything
you
can
think
about
whether
there
are
additional
risks
you
can
see
here,
or
you
know
or
like
things
that
we
should
watch
for
would
be,
would
be
really
good
if
you
could
do
that.
It
didn't
take
me
too
long
to
go
through
this
one.
Certainly
not
every
point
in
their
document
is
right
is
relevant
to
us,
so
it
took
it's
probably
like
30
minutes
would
be
enough
to
get
an
overview
and
leave
a
comment.
D
And also, we should try to make that work if we can, because it will make it much easier for us to deploy the registry: this would be the one way to keep just one single cluster and one single bucket, and do the migration in place, instead of us needing to fiddle around with a lot of extra infrastructure work. So review it, but give it a positive spin if you can.
B
There
is
one
big
area
in
there
that
I
think
will
be
a
technical
challenge.
I
left
a
comment
around
that
as
yet
we
don't
know
how
we'd
solve
that,
so
that
will
be
a
one
as
you
go
through.
If
you
can
see
things
that,
like
okay,
how
would
we
do
that?
You
have
some
ideas,
then
also
would
be
a
great
one
to
add
to
and
then
beyond.
That
is
the
two.
Our
two
kind
of
okls
we've
got
going
on
like
api
service,
obviously,
and
also
rollbacks.
B
We've
got
we're
so
close
to
having
everything
we
need
for
our
next
test
and
then
there's
a
few
other
things
which
1738
and
1741,
but
then
plus
anything
else
on
the
board
right.
So
I
went
through
the
board
this
morning.
Everything
on
the
board
is
valuable
and
useful
and
links
in
with
either
projects
we
have
going
on
or
okrs.
B
Cool
so
related
to
this,
so
we
have
got
an
outstanding
task
on
rollbacks
that
we
need
to
pick
up,
and
we
also
the
scalability
team
have
started
looking
at
scaling
db
connections
and
I
think
after
this
morning's
incident
reliability,
I
expect
we'll
be
looking
at
scaling.
The
web
nodes.
B
Cool
okay
I'll
cancel
this
for
tomorrow
and
I'm
kind
of
expecting
as
soon
as
we
feel
like.
We've
got
through
the
rollbacks
follow-up
issues,
which
alessia
is
kind
of
nicely
grouped
on
the
on
the
issue
on
the
epic
already
we
can
just
schedule
our
next
production
test
and
have
another
run.
D
And
how
far
do
we
need
to
maybe
push
forward
the
scaling
above
the
fleet?
Because
it
could
block
us
in
doing
deployments
right
if
you
need
to
drain
canary
for
some
reason?
And
we
can't,
because
you
don't
have
enough
capacity
in
the
fleet.
B
That
would
that
would
be
my
like.
I
don't
know
if
jarv
you
don't
know
anymore
and
that
I
guess
eagle
was
more
involved
in
it,
but
well
the
actually.
The
only
thing
I
did
leave
a
comment
on
is:
we
do
get
quite
close
to
using
all
of
our
db
connections
right.
We
hit
80
of
db
connections
if
we
scale
up
the
web
fleet
and
the
api
fleet
too
much
we're
going
to
run
out
of
those.
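As a back-of-envelope illustration of that concern, with made-up numbers (the 5000-connection limit and the per-node pool size of 100 are purely hypothetical, not the real production figures):

```python
def nodes_headroom(max_conns: int, used_frac: float, conns_per_node: int) -> int:
    """How many more nodes can be added before the database connection
    limit is exhausted, given the current fractional usage."""
    free = round(max_conns * (1.0 - used_frac))  # connections still unused
    return free // conns_per_node

# At 80% usage of a hypothetical 5000-connection limit, with each new
# web/api node opening a pool of 100 connections, only 10 more nodes fit.
extra = nodes_headroom(5000, 0.80, 100)
```

So scaling the web and API fleets has a hard ceiling unless the DB connection work lands first, which is why the two efforts need to be sequenced together.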
A
I'm
not
sure
I
haven't
really
looked
into
this
myself.
I'm
also
worried
about
photos
too,
because
I
don't
know
skyrim.
Are
we
like
in
pretty
good
shape
now
with
c4
photos.
B
I will follow up on that issue, but yeah, I expect Igor has experienced it twice in two weeks, so I'll be quite surprised if he doesn't push to move this one up for us. But I'll check in on that one.
B
Yeah, I'll check in with you. I'll be surprised if it doesn't get handed over, but yeah, I'll take a look. Good. And then, on item E: just to confirm that Yorick has officially left the team.