From YouTube: 2022-11-30 Delivery:System Sync and Demo
A: For the 30th of November 2022 we don't have anything to demo today, but since we have a new addition to the team (welcome, Vladimir!) we decided to keep this meeting anyway, to have an overview of the team itself, to understand our vision and mission, our long-term view of what the team wants to tackle in the short, medium and long term, and also the main working items we have right now as part of our OKRs.
A
Everyone
is
working
on
some
other
parts.
So,
if
letting
me
you
have
any
questions
feel
free
to
if
it
was
done
in
general,
we
already
added
as
part
of
our
handbook,
and
maybe
let
me
share
my
screen
here-
ownership
of
the
team
right
so.
A
So
in
general
we
have
three
main
functions:
three
main
areas:
one
is
release
managers,
that
is
the
rotation
that
everyone
is
a
part
of.
You
know
in
a
four
weeks
month,
rotation
right
now
in
this
current
month
for
release,
5.7
is
Ahmad
engine,
and
usually
we
don't
have
like
both
two
persons
from
the
same
Team
on
the
rotation.
A
Usually
you
tend
to
have
one
personal
orchestration
or
a
personal
system,
but
during
the
schedule
this
time
we
are
a
bit
like
a
problem
with
coverage,
so
we
actually
ended
up
having
a
lot
of
system
release
management
for
these.
For
the
last
couple
of
rotations
in
general
delivery
system
is
a
customer
of
one
of
the
of
reliability.
I
would
say:
whereability
we're
expecting
related
to
provide
core
infrastructure
functionalities
and,
on
the
other,
end
delivery
system
as
as
customers
deliver
orchestration.
A
So
if
we
want
to
see
these
division
in
a
easier
way,
we
can
see
orchestration
as
the
team
that
orchestrate
all
the
releases
how
the
deployments
are
the
package
are
created.
So
when
they
decide
what
goes
into
the
package
itself
and
the
system
is
actually
the
theme
in
in
charge
of
deciding
where
this
package
is
going
to
be
deployed
right
and
at
eye
level
of
the
team
and
around
that
clearly,
we
have
a
lot
of
suddenline
Duties
that
are
coming
into
into
play,
especially
like
on
on
system
side.
A
Is
it
possible
to
deploy
this
package
into
this
cluster
application
allowed
the
only
Matrix
part-
and
we
also
have
I
know
here
on
the
Matrix
part,
for
this
quarter
and
also
maintaining
all
the
values
part
about
renderization,
dashboard
and
also
all
divided
functionalities,
that
we
want
to
build
on
top
of
our
deployment
environments
like
production,
staging
and
so
on,
and
also
one
important
thing
that
is
part
of
our
longer
term.
C
Yes,
I
do
have
a
couple
of
questions
so
in
in
the
value
stream
like
develop,
build
test
release,
deploy
where
we
are
staying.
Okay,.
A
So
we
we
are
not
part
of
one
of
the
stage
groups,
because
we
are
kind
of
an
horizontal
function
within
infrastructure
right,
so
we
are
in
charge
of
release
and
deploy.
We
can
see
this
way.
You
can
see
it
because
currently
stage
groups,
so
the
teams
that
are
like
developing
like
the
function,
the
functionalities
and
the
software
and
the
product
itself.
A
They
are
creating
the
code,
creating
the
merge
request,
merge
requests
are
merged
and
test
and
delivery
is
in
charge
of
deploying
this
software
and
releasing
this
software
on
a
monthly
currents,
the
22nd
of
each
month,
where
we
have
the
monthly
release
and
also
we
are
in
charge
of
security
releases.
That
is
the
one
that
is
actually
happening
today,
where
Jenny
and
are
under
in
charge
of
so
the
security
release
of
this
month
is
actually
built
on
top
of
the
previous
release.
A
So
15.6.3
will
be
the
security
release
of
this
month
and
we're
also
in
charge
of
patch
releases
patch
releases
usually
are
as
well
coming
on
top
of
the
previous
month
release
of
the
compared
to
the
one
neighbor
urase
management.
So
right
now
stage
groups.
They
are
not
directly
in
charge
of
deploying
their
software.
We
actually
deploy
the
software
for
them,
so
we
orchestrate
all
the
deployment
should
open
and
we
decide
in
which
environment
the
software
goes
at
which
stage
of
our
like
coordinated,
Pipelines,.
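The cadence described above (a monthly release on the 22nd, with security and patch releases tagged on top of the previous release, e.g. 15.6.3) can be sketched roughly as follows. This is an illustrative sketch, not the actual release tooling; the function names and the hard-coded day are assumptions taken from the conversation.

```python
from datetime import date

# Assumption from the conversation: the monthly self-managed release
# ships on the 22nd of each month.
MONTHLY_RELEASE_DAY = 22

def monthly_release_date(year: int, month: int) -> date:
    """Scheduled date of the monthly self-managed release."""
    return date(year, month, MONTHLY_RELEASE_DAY)

def next_patch(version: str) -> str:
    """A security/patch release builds on the previous release by
    bumping the patch component: 15.6.2 -> 15.6.3."""
    major, minor, patch = (int(p) for p in version.split("."))
    return f"{major}.{minor}.{patch + 1}"
```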
A: Sorry. You will also have an issue coming up as part of your training, where you will learn a lot about release management and about the release tooling. It is a pretty wide-breadth topic because it touches a lot of parts. There is a lot of complexity that has been built up because of the needs that came along over time while the product was developing.
A
So
is
also
like
a
pretty
complex
process
at
the
beginning,
because
it
has
a
lot
of
small
nuances
and
so
on,
but
you
will
see
that
how
this
is
going
to
work
end
to
end
everybody
on
us
has
been
on
in
training
for
his
measurements.
I
think
Reuben
trained
me
at
least
I,
don't
know
ahmadu
you're
being
trained
from,
but
a
journey
you've
been
to
improved
Myra
and
with
Amy,
but
usually
we
are
like
these
training
periods
before
going
into
the
first
rotation
of
risk
management.
C
First
of
all,
you
said
that
you
partially
own
in
the
infrastructure
and
the
built
infrastructure
is
that
is
that
true,
like
in
terms
of
kubernetes
like
who
is
actually
managing
kubernetes
clusters
and
the
bill
can
rebuild
those
kubernetes
clusters.
So.
A: In theory, right now it is a kind of shared responsibility, and it has been so far, together with the reliability team, which also takes care of some aspects of our infrastructure. Recently, for example, there has been the introduction of Vault in our infrastructure for secrets management. So there is this kind of shared responsibility for some parts of the infrastructure with reliability.
A
We
actually,
they
demanded
the
maintenance
of
the
cluster
itself,
so
gke
upgrade
and
so
on.
It
usually
should
be
part
of
reliability,
duties,
I,
think
for
mentioning
for
historical
reasons
in
grade,
sometimes
to
par
two-part
in
this
GK
upgrades
for
our
clusters
and
so
on,
but
usually
it's
not
our
duty
to
upgrade
the
cluster
questions
and
so
on.
What
our
duty
is
is
to
build
functionalities
on
top
of
the
core
infrastructure.
That,
in
theory,
should
be
provided
to
us
from
barrier
habit.
C
So,
and
in
in
case,
we
want
to
or
I
don't
know
or
change
something.
C: Yes, how much influence do we have in actually proposing new solutions and changing the processes? Because, you know, the infrastructure part and the deployment part are very tied together, and if you introduce a change somewhere, it might impact the changes that either the infrastructure team owns or the delivery pipeline.
A
So
so
far
there
is
usually
these
kind
of,
like
big
changes,
are
put
out
for
discussion
with
proposal.
First
and
then
key
stakeholders
are
involved
in
these
discussions
right.
Everyone
is
putting
down
their
requirements
and
also
like
bringing
up
the
reason
why
we
should-
or
we
shouldn't
do
some
some
of
these
parts.
A
Obviously
what,
since
you
are
customers
of
reliability
on
these
aspect
in
these
aspects,
and
so
on,
we
can
definitely
influence
how
certain
things
should
work,
because
at
the
end
is
like
you
know,
as
you
mentioned
before,
there's
a
value
stream
right
where
we
have
a
reliability
that
is
maintaining
the
whole
infrastructure.
We
are
building
Solution
on
top
of
the
core
infrastructure,
still
infrastructure,
to
try
to
build
the
value
later
down
to
the
to
the
chain,
to
orchestration
and
later
on,
to
Stage
groups
and
finally
to
our
customers.
A
Obviously,
if
tomorrow
we
need
I,
don't
know,
then
we
have
the
need.
Let
you
do
the
good
example
of
having
disposable
cluster,
or
at
least
the
capability
of
having
a
cluster
built
on
demand,
to
deploy
a
risky
change.
Something
like
that.
We
will
right
now
we
are
building
some
of
these
functionalities
on
our
own,
because
there
was
no
no
alignment
until
now
with
the
with
the
reliability
core
infrastructure,
to
add
this
functionality
out
of
the
roadmap,
but
later
on
I'm
expecting
this
to
be
okay.
A
You
know
we
need
probably
a
dysfunctionalized
implemented
according
to
structure
level,
and
then
on
top
of
that
we
are
going
to
build
the
functionalize
that
we
want
right.
I,
don't
know
having
a
new
deployment
strategies
having
a
lot
of
multiple
calories
environment
where
we
can
route
some
traffic
for
experimental
features
and
so
on.
A
So
the
expectation
or
going
forward
is
to
have
core
infrastructure,
the
infrastructure
team
from
reliability,
providing
us
with
basic
core
infrastructure
and
functionalities
that
we
can
use
out
of
the
box,
but
definitely
we
can
influence
and
we
have
requirements
to
them
to
have
how
to
shape
these
functionalities.
The
way
that
they're
gonna
be
useful
for
us
not
for
the
sake
of
the
tool,
but
for
the
sake
of
the
value
we
want
to
build.
On
top
of
those.
C
And
also
I
kind
of
I
I
might
be
completely
wrong,
and
maybe
this
is
my
wrong
perspective
perception,
but
I,
usually
those
monthly
releases,
if
you,
if
you
say-
and
you
have
monthly
releases,
usually
they
come
with
high
level
of
toil
because
it's
kind
of
manually
monthly
releases
you
need
to
build.
You
need
to
prepare
their
needs.
C
You
need
to
hand
over
the
release,
you
need
to
you
know
like
roll
out
and
etc,
etc,
and
I
just
wanted
to
understand,
like
a
in
terms
of
Doyle
and
the
project
work
and
like
improving
the
things
like.
What
is
the
balance
here
like
how,
in
in
person
which,
like
how
much
time
you
spend
on
toil
and
how
much
time
you
spend
on
like
a
real
projects
that
you
you
work
in
to
improve
the
things
so.
A
Reduce
tutorial
in
general,
if
you're,
not
in
release
management
duties,
you
work
on
Project
work
that
is
related
to
some
more
hairs
or
if
it's
not
trade,
to
an
Arts
is
still
high
priority
work
for
improving
some
parts
of
the
of
the
tooling
of
the
process
that
we
have
or
reducing
some
thought
in
some
other
parts.
If
you
see
all
the
labels
that
we
use
for
in
our
issues,
we
will
say
there
is
a
level
that
is
like
soil
reduction
or
release
toilet
I.
A
Don't
remember
how
is
worded,
but
otherwise
we
have
mainly
like
label
for
projects
like
credit
to
kubernetes
or
related
to
release,
velocity
and
so
on.
So
in
general,
you
are
in
charge
of
so
the
release
itself.
Let
me
take
a
step
back.
There
is
around
the
release
and
the
security
release
and
Patch
release,
and
so
on.
A
There
is
a
huge
amount
of
automation
by
each
amount,
and
there
is
a
very
good
integration
with
we
are
using
chatops
and
there
is
a
lot
of
automation
built
around
all
the
prerequisites
that
you
need,
any
step
that
you
need
to
take
to
to
put
a
result.
At
the
end
of
the
month,
right
on
top
of
that,
it's
not
that
we
we
release
gitlab.com
only
once
a
month
with
a
new
version,
we're
actually
releasing
multiple
times
a
day
on
youtube.com
is
going
to
get.
A
Probably
it's
like
correct
me
guys
if
I'm
wrong,
like
it's
a
eighth
or
eight
dollar
employee
packages,
a
day
right
in
theory,
if
we
based
on
the
schedule
so
at
least
eight
times,
eight
nine
times
a
day,
you
have
a
new
big
holiday
deploy
package
because
it
is
in
general,
it's
like
fully
automated,
where
the
duty,
or
there
is
manager
if
everything
was
military,
is
simply
to
press
one
button
when
shut
up
spot,
notify.
There
is
measure
and
say:
okay,
now
you
the.
A
Test
so
please,
if
you
want
to
promote
it
to
production.
So
there
is
a
lot
of
automation
built
around
that
there
is
not
so
much
manual
activities
involved.
Clearly,
the
week
of
the
release
is
always
coming
hot,
sometimes
where
you
have
a
requests
from
stage
groups
teams
that
they
want
to
include
the
feature
within
the
in
the
release
of
the
month
and
then
maybe
you
know
you
got
kind
of
at
a
bit
of
like
more
work
to
do
in
this
kind
of
situation.
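The promotion flow described above, everything automated up to the point where a release manager presses one button, can be sketched like this. The function and messages are invented for illustration; this is not the real ChatOps integration.

```python
# Illustrative sketch: auto-deploy packages progress automatically, and the
# only manual step is the release manager's explicit promotion to production.
def promote(package: str, staging_green: bool, release_manager_approved: bool) -> str:
    """Gate production promotion on a green staging run plus manual approval."""
    if not staging_green:
        return "blocked: staging is not green"
    if not release_manager_approved:
        return "waiting: release manager notified, promotion pending"
    return f"promoting {package} to production"
```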
A: The monthly release is when we create the packages that we publish outside to our customers. So let's say you are a self-managed customer: it means you get one of the packages that we produce and release on the 22nd of each month, and you install it on your premises.
C
This
this
monthly
release
is
only
for
for
the
customers
who
are
not
using
like
a
SAS
solution
that
they
are
running:
gitlab,
on-prem
right,
exactly
okay,
but
but
the
maintenance
and
the
upgrades
and
all
the
stuff.
It's
of
those
monthly
releases
and
new
versions,
it's
it's
kind
of
hand
over
it
and
delegated
to
customers
themselves,
or
we
also
somehow
control
them.
A
For
each
version
we
are
releasing,
we
have
an
upgrade
path
that
allows
them
to
upgrade
to
the
next
question
without
any
problem
without
if
they
need
to
they
want
to
skip.
You
know,
moving
from
15
dots
1
to
15.7,
they
can
still
do
it
in
one
shot.
I
think
if
the
upgraded
part
is
there,
but
we
need
to
have
a
full
restart
of
the
gitla
distance.
So
maybe
they
have
like
some
Minnesota
downtime
depends
on
which
appearance.
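An upgrade-path check like the one described above (a jump such as 15.1 to 15.7 in one shot when no intermediate stop is required) could be sketched as follows. The required-stop mechanism here is a hypothetical simplification for illustration; the real upgrade path is documented per release.

```python
# Hypothetical sketch of computing the versions a self-managed instance
# must install, in order: any required stop strictly between the current
# and target versions, then the target itself.
def upgrade_steps(current: str, target: str, required_stops: list) -> list:
    def key(v):
        # turn "15.6.3" into a comparable tuple (15, 6, 3)
        return tuple(int(p) for p in v.split("."))
    stops = sorted(
        (v for v in required_stops if key(current) < key(v) < key(target)),
        key=key,
    )
    return stops + [target]
```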
A: Let's say that your company, a startup or a big company, has its own infrastructure: you take one of the packages we publish, either the one for your VMs or the cloud-native package, and you install it in your infrastructure. So you're also responsible for doing the upgrades, and you're responsible for a lot of other things. If you have a problem, you need to be able to provide logs if you have a contract that includes customer support from GitLab itself, and so on.
A: If you don't have a team behind your company that can take care of the maintenance of the infrastructure, then at that point you can be a SaaS customer, where you can still have an Ultimate license, or we also have another offering right now, GitLab Dedicated, which is a single-tenant offering that is just for your company: your company gets the same as SaaS, but single-tenant.
C: And how does the versioning of the SaaS solution relate to these monthly packages? Are the monthly packages older than SaaS, or is SaaS older than the packages?
A
If
you
use
the
gap,
usually
in
the
moment
in
the
27th
of
the
month,
when
the
packet
we
publish
the
packages,
gitlab.com
deployed,
it
version,
probably
a
couple
of
days
earlier
or
three
foreign.
A: I don't remember exactly, but some time before. GitLab.com is always rolling forward, so what we publish on the 22nd is probably a version that we tagged a couple of days, or one day, before.
C: And how about the database migrations, I don't know, some schema changes, etc.? Are they also included in the deployments?
A
Let
me
try
to
respond
with
this,
otherwise
I'm
going
to
ask
for
for
some
help.
So
we
have
two
kind
of
migration,
the
migration
that
are
coming
within
the
normal
deployment,
and
then
we
have
another
set
of
migration
data.
The
post
deployment,
migrations,
post-deployment
migrations.
We
is
the
duty
that
is
manager
to
execute
them,
so
we
execute
them
after
a
new
package
has
been
deployed
to
production
Max.
Usually
we
try
to
execute
them
one
time
a
day,
because
after
we
execute
them,
we
we
are
losing
the
capability
of
rolling
back.
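The two migration kinds described above can be sketched minimally like this. Class and method names are invented for illustration; the point is the ordering and the loss of rollback after post-deployment migrations run.

```python
# Minimal sketch: regular migrations ship with the normal deployment, while
# post-deployment migrations are executed separately (roughly once a day) by
# the release manager, after which the deployment can no longer roll back.
class DeploymentState:
    def __init__(self):
        self.applied = []
        self.post_deploy_ran = False

    def deploy(self, regular_migrations):
        # regular migrations run as part of the normal deployment
        self.applied.extend(regular_migrations)

    def run_post_deployment_migrations(self, migrations):
        # the point of no return: rollback is no longer possible afterwards
        self.applied.extend(migrations)
        self.post_deploy_ran = True

    def can_roll_back(self):
        return not self.post_deploy_ran
```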
B: The part about migrations and post-deployment migrations is correct. I'll just add that there are issues for making post-deployment migrations rollbackable; Myra had created a whole epic for that, I think, so hopefully we'll work on that sometime, and then even post-deployment migrations will be rollbackable. There is also a third type of migration, called a background migration.
C: And is the persistence layer somehow sharded by regions? Do you do a rollout to one region for, I don't know, canary testing, A/B testing, or something like that, a progressive rollout from one location to another location, or do you just spread the new version everywhere at once?
B
We
do
have
a
canary,
so
production,
we
have
a
staging
Canary
and
production
canary.
We
deploy
usually
to
the
canaries
first
and
then
there's
a
30
minute
baking
time
for
for
production,
Canary
and
then
we
start
the
production
deployment.
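The environment progression just described can be written down as a simple ordered plan. The environment names follow the conversation; the data structure itself is only an illustration, not the actual deployer.

```python
# Sketch of the progression: staging canary (with QA), production canary,
# a 30-minute bake, then the full production deployment.
BAKE_MINUTES = 30

def deployment_plan():
    return [
        ("staging-canary", "deploy and run QA"),
        ("production-canary", "deploy"),
        ("production-canary", f"bake for {BAKE_MINUTES} minutes"),
        ("production", "deploy"),
    ]
```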
C
And
automated
tests,
they
heat,
The,
Cannery
right
and
then
we've
automated
tests
green.
Is
it
like
automatically
progressing.
B: We have automated QA running after, I think, every environment. So first it deploys to staging canary and runs QA; then it deploys further and runs two types of QA: one runs entirely against staging canary, and then there is mixed-deployment testing, which runs against staging canary, which is on the new version, and against staging, which is on the previous version.
C
There
any
manual
Q8
process
involved
as
well
or
like
everything,
is
automated,
like
QA
singing.
C: And also, sorry, I'm asking a lot of questions, just trying to understand how things work. So, since everything is tested automatically...
B
So
code
coverage
I
think
is
like
around
90
percent.
If
you
open
any
merge
request,
there
is
a
code
coverage
job
there
in
the
pipeline,
so.
C
Well,
that's,
that's
quite
is
that
only
unit
tests
or
the
code
coverage
or
is
also
includes,
like
integration
tests,
interface
tests,
smoke
tests,
I,
don't
know
something
like.
B
Well,
there's
actually
a
open
issue
for
automating
that
I
think
that
okay,
the
contention
point
for
automating
that
right
now
is
how
do
we?
How
do
we
allow
release
managers
to
say
that
they
are
available,
because
the
release
manager
needs
to
be
available
in
case?
Something
goes
wrong
so
like
how
do
you
tell
the
tool
that
okay,
there
is
a
release
manager
available
available?
You
can
go
ahead
and
automatically
promote.
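The contention point just described, gating automatic promotion on a release manager being available, reduces to a small predicate. Function and argument names are invented for illustration.

```python
# Sketch: only promote automatically when QA is green AND at least one
# release manager has marked themselves available to respond to problems.
def can_auto_promote(qa_green: bool, available_release_managers: list) -> bool:
    return qa_green and len(available_release_managers) > 0
```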
C
Are
there
any
plans-
and
you
know
Milestones
set
up
for
enable
cicd
like
a
real
crcd.
A
So
you
mean
to
remove
that
to
the
manual
step
right,
yeah,
yeah,
so
I
think.
If
we
need
to
go
there,
we
need
to
have
a
bit
of
better
to
put
together
a
bit
better,
our
observability,
that
we
already
have
the
data,
but
we
also
probably
need
to
what
we
want
to
have
also
capability
to
actually
roll
back
on
the
same
fashion
when
something
happens.
B: We are working on the prerequisites. Observability is one of our OKRs for this quarter, so that will certainly help with making it fully automated, yeah.
C: Because, for example, at my previous company, how it worked: we had an OKR to enable continuous deployment and we finally did it, but the major problem was the test coverage. The test coverage was pretty bad, and we spent a lot of time improving it, and once we got the test coverage sorted out, it was okay to just, you know, not care about these releases anymore.
C
We
are
confident
enough
that
our
tests
are
covered
every
every
major
user
stories
and
like
the
code,
quality
and
etc,
etc.
C
I,
don't
know
like
I'm,
just
I'm,
just
trying
to
understand
the
understand
the
priorities
because,
from
my
point
of
view
like
enabling
cicd
should
have
like
pretty
high
priority,
you
know
which
enables
like
a
faster
value
streams
but
I,
don't
know
if
you,
if
you
say
like
you,
you
have
other
priorities
then
maybe
just
like
a
could.
A: To give you an example: we have a risky change coming in around March.
A
That
is
going
to
be
like
the
introduction
and
switching
to
Ruby
tree
right,
so
for
the
way
that
the
entire
pipeline
is
deployed.
So
we
we
tag
the
values
repositories
and
then
we
we
have
distribution
that
is
creating
all
these
packages
right
now
we
cannot
have.
We
are
going
to
have
packages
running
or
on
Ruby
2.7.6
or
in
Ruby
3.,
so
I'm
quite
sure
that
all
the
functional
requirements
are
going
to
be
fulfilled
by
the
testing.
A: It will run for a bit, and everything may look fine before it goes to production, but we don't know what's happening after, I don't know, three or four days running on Kubernetes, because we don't have this data yet, at least not with production data. So we need functionalities that will allow us to introduce those risky changes into production with a very minimal impact on end users, but still be able to collect the right amount of information that will allow us to build confidence.
A
So
there
is
risk
we
see
them
at
much
higher
priority
right
now.
This
move
to
director
to
CSD
and
at
the
same
time,
if
we
have
this
CI
CD,
we
will
still
need
to
have
different
deployment
strategies
for
when
we
introduce
new
changes
right
to
be
able
to
switch
traffic.
If
we
speak,
let's
say:
blue
green
from
our
cluster
to
a
different
cluster,
so
also
you're,
having
like
routing
capabilities
between
a
new
version
that
we
want
to
bring
up.
A
new
version
want
to
bring
down
automatically
roll
back
to
the
previous
version.
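The automatic-rollback idea described above, shifting traffic back when an error or Apdex signal degrades, can be sketched as a simple routing decision. The thresholds here are invented for illustration and are not real production values.

```python
# Hedged sketch: roll back when Apdex drops below a floor or the error
# rate exceeds a ceiling; otherwise keep shifting traffic forward.
APDEX_FLOOR = 0.95          # illustrative threshold
ERROR_RATE_CEILING = 0.01   # illustrative threshold

def routing_decision(apdex: float, error_rate: float) -> str:
    if apdex < APDEX_FLOOR or error_rate > ERROR_RATE_CEILING:
        return "rollback: shift traffic back to the previous version"
    return "proceed: keep shifting traffic to the new version"
```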
A
To
when
we
see
an
error,
we
see-
or
we
see
like
epidex
like
indexes
like
going
down
so
I-
think
we
need
to
build
better
strategy
on
that
side
to
be
able,
then
to
move
to
cicd
right.
Cse
is
going
to
be.
We
need
some
prerequisites
that
are
going
to
allow
us
to
be
in
a
full
I
would
say
in
a
full,
safe
solution
to
deploy
continuously
to
production
without
like
ice
with
ice,
squid
eyes
open
and
closed
at
the
same
time.
So.
C
This
is
basically
what
what
I
just
said
right
so
like
you
need
to
build
the
confidence
and
the
like.
The
tests
are
not
bring
you
enough
confidence
even
back.
You
have
like
90
of
coverage
tests
are
not
bringing
you
enough
confidence
to
get
rid
of
this
manual
manual
process
of
approving
changes
and
testing
them.
Also,
you
said
you
you
mentioned.
We
don't
know
how
it's
going
to
work
on
production,
because
we
don't
have
enough
traffic,
but
you
can
always
mirror
the
traffic
from
production
to
Canary
and
see.
A
Yeah
we
actually,
we
actually
did
so
going
back
to
the
example.
Gigalab
sshd
right,
so
sshd
is
a
microservice
that
is
replacing
GitHub
shell,
so
it
was
actually
replacing
for
open
SSH
to
have
a
a
smaller
memory
footprint
and
running
more
efficiently.
A
This
has
been
put
on
staging.
We
did
send
our
request
to
staging
everything.
It
was
looking
fine
on
staging.
Then
we
went
to
move
this
to
Canary.
We
surfaced
some
problems
with
surface,
but
we
didn't,
but
at
certain
point
we
started
to
see
it
was
instantly
in
production,
but
then
we
started
to
see
the
context
canceled.
A
We
started
when
we
started
to
increase
the
amount
of
traffic
that
we're
actually
sending
to
Gallery.
We
started
to
see
some
errors
that
we
actually
couldn't
see
in
staging,
and
this
was
because
we
didn't
have
enough
statistical
significance
on
that
to
represent
all
the
ciphers
that
all
the
SSH
client
could
use.
We
are
using
always
with
the
with
testing
with
the
most
common
clients,
but
then,
when
we
go
to
production,
you
had
you
know
thousands
of
users
using
very
weird
ciphers.
A
Sometimes
they
were
actually
not
handled
by
our
implementation
of
ssh3
right
and
then
we're
getting
these
contest
Council
that
we
are
like
not
be
able
to
understand
what
was
the
reason.
So
this
was
a
lesson
learned,
probably
better
observability,
to
be
built
within
the
software
itself,
but
also
it
was
really.
It
was
really
strange
to
not
have
something
like
that.
Also
you
just
mentioned
something
like
traffic
mirroring
traffic
mirror
is
something
that
probably
would
be
very
nice
to
have,
especially,
but
keep
in
mind
that
we
have
also
data.
A
Let's
say
that
you
mirror
Git
gits
You
need
to
have
the
same
Repository
on
on
Italy
on
a
different
cluster
and
it's
difficult
to
predict,
which
one,
which
price?
Will
you
get
right
so
and
yeah
in
addition
to
that,
traffic
mirroring
is
still
not
a
functional.
That
is
that
we
have
at
infrastructure
level-
and
you
know
like
this-
is
going
a
bit
on
on
the
realm
of
having
like
H
proxy
0.0
or,
like
I
mean
like
service
meshes
and
so
on.
A
That
is
different
topics
for
discussion,
especially
with
neural
Library,
Within.
A
No,
we
don't
have
service
meshes
so
far.
We
have,
we
had
Calico
so
far
in
and
I
share
proxy
and
everything
is
behind
cloudflare.
A: There are some efforts to replace, or upgrade, HAProxy, and discussion about having different Ingress controllers.
C
What's
what's
the
purpose
of
proxy,
what
does
it
do
that?
What
can
it
do
that
cloudflare
thing
you
cannot
do.
A
Honestly,
I
don't
know
the
historical
reason
for
that.
It
was
being
taken
like
some
years
ago
when
they
started
the
immigration
with
kubernetes
and
keep
in
mind
that
we
are
NHA
proxy
and
er
Jenny
are
commandment
is
a
bit
stronger
than
me
here
in
memory.
We
also
have
all
the
abms
right.
Everything
is
behind
each
proxy
there.
So
we
then
not
everything
is
on
kubernetes,
like
all
digital
infrastructure
is
still
nbms,
and
it
was
like
the
way
that
we
arriving
traffic.
A: A lot of tooling has been built around HAProxy as well: how we switch traffic when we do rollouts or upgrades and everything else. So it's also important to understand, if we change the tooling, which kind of tooling we also need to adapt for the new process and so on. If you look in the infrastructure project, there are a lot of epics and issues for discussion around these technologies, if you're interested in reading into those.
C
Is
this
proxy
runs
on
kubernetes?
So
it's
like
a
dedicated
VMS
that,
like
VM
layer,
yeah.
A
Hp
proxy
is
not
running
on
kubernetes.
So
far.
Okay,
there
were
some
discussions
to
move
that,
but
I
don't
know
where
they
are
right
now.
C
All
right
but
yeah
to
be
multiple,
if
you,
if
you
have
multiple
regions,
so
you
might
need
to
have
multiple
proxy
kind
of
yeah
like
local
locations.
A: Yeah, I'm going to link you a couple of things, Vladimir, maybe on Slack. So, we are at time, I think. Yes, we are at time, so I invite you to go through the links whenever you have time. I know your onboarding issue has a lot of checkboxes to mark; I think we crossed 300 checkboxes for the entire onboarding issue.
A
Exactly
so,
you
need
to
repeat
it
in
type
day,
one
with
the
new
laptop
right
now
yeah.
So
when
your
time
These,
Are,
Gonna,
Be,
Still,
Still
In
the
agenda,
this
issue
that
they
say
there
there
is
some
working
items
we
have.
There
is
a
explanation
of
our
list
process.
All
the
links
of
the
handbook
and
I'm
gonna,
also
link
to
you.
A
couple
of
a
video
I
think
was
from
scarbeck.
That
was
like
a
couple
of
one
year
and
a
half
ago,
two
years
ago
presented
a
coupon.
A: Okay, well, thank you very much. I'm happy that we ended up having this format; it turned into a nice ask-me-anything session. I hope we clarified some of the questions you had, Vladimir. Again, welcome to the team, and I'll speak to everybody a bit later, then.