From YouTube: Argo Contributor Experience Office Hour 10 Sep 2020
A: Okay, so good morning, everyone — or good afternoon. I'm Alex, I'm a software engineer at Intuit, and today is our second Argo contributors meeting. I think we have new attendees in the meeting, so I guess before we start — you know, our agenda — let's do introductions.
B: Hello, everyone, Jan here. Yeah, I'm an individual contributor since, I don't know, a year and a couple of months or so. Some people might know me from Slack or from the issue tracker already; my handle is janfis.
E: Hi, I'm Shobhick. I work for Red Hat. I am an architect at Red Hat around a few areas, and that includes Argo CD and GitOps. I'm here because my team and I want to contribute to Argo CD, and we want to ensure that we're ramped up. I also want to ensure that I can share some of the load from you, Alex, and the other core contributors around how we plan and take up key areas of Argo CD.
A: Okay, so we have a couple of agenda items today. We're going to talk about using Telepresence to develop Argo CD. Alec was going to do that presentation, and I think he's a couple of minutes late, so we'll push it to the second half of the meeting. For this meeting I also prepared a kind of post-mortem for the 1.7 release: a small analysis of all the regressions which leaked into the release (we did catch them eventually), and then, not real proposals, but some ideas of how we can improve the quality of the next releases. And I just got a message from —
F: — sharing my screen, so you can see my screen. Okay. So, last week, or maybe two weeks ago, I made a PR that describes how to use a tool called Telepresence to debug your application that is installed on your remote cluster. You can just go and read it, actually, but I'll also show it to you. I have here the configuration for VS Code, because this is what I'm using.
F: So Telepresence is an open-source tool from Datawire that gives you the ability to swap a remote deployment with an identical deployment in terms of the spec, with only the image changed.
F: Once the image is changed, it will open an SSH connection back to your local machine and start forwarding all the traffic, so you can start debugging the remote service in your local process. It gives you a very quick feedback loop — you don't need to build an image, push it to the cluster, and so on — and it gives you a very realistic environment.
F: So here I have the Argo CD server, and here is my cluster; everything is installed, and I will be swapping this pod. It's as simple as a telepresence swap-deployment: what is the name of the deployment, what is the service name, what is the environment-variable file name. I will explain how it relates to debugging later.
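A minimal sketch of the swap described above, for Telepresence v1 (the Datawire CLI being discussed). The deployment name, namespace, port, and local command are assumptions for illustration, not the exact invocation from the PR:

```shell
# Swap the remote argocd-server deployment for a Telepresence proxy pod,
# capture the remote pod's environment into a file, and expose the API port.
# (Names/ports are illustrative; adjust to your install.)
telepresence --swap-deployment argocd-server \
             --namespace argocd \
             --env-file .envrc.remote \
             --expose 8080 \
             --run bash

# Inside the spawned shell, in-cluster traffic to the swapped service is
# forwarded to whatever you run locally, e.g. the server under a debugger:
#   go run ./cmd/argocd-server/main.go   # path is illustrative
```

When the session ends, Telepresence restores the original deployment.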
F: So from now on we can just go and — oh, by the way, if we go into the Telepresence root, we are actually looking inside the container file system. We can see, for example, the os-release file, so the volume has been mounted into our file system.
A: I see, yeah. And then, does it happen to have make, or does it magically install all the tools into that image?
F: It doesn't need make; it doesn't need anything inside the image besides starting the network tunnel back to your local machine.
F: Here is the debug. In the same way, I can debug one or multiple services. The only conflict may be if you have different ports, so you need to change your local port to something else and then tune the application to use it. Nothing too complex.
A: Yeah, I feel like I'm still missing one last point: how does it know which binary to start? Can we see the launch configuration one more time?
F: Yeah, so here's my local configuration. It's just building the command for the Argo CD server and starting it — the same thing that I would do with go run, something like that.
A: Okay. And then there's volume mounting between the pod and your local file system, so every time you make a change locally, it kind of magically moves over, yeah?
F: So yes. Luckily, Argo CD does not depend on the file system — it's not looking for files from the file system, as far as I saw at least. But for future pull requests, if we use this, we need to take into account that the root may be different, because when debugging a remote cluster the root is this one.
F: Now, as I mentioned, once I started to play with Argo locally, I needed to read everything in the contribution guide about running Argo CD locally, and I wanted to start very fast. I wasn't sure if my environment was configured properly or not, but I did know that the Kubernetes cluster was running and everything was good. So this was my first motivation.
F: I was not sure how to do it. And the other benefit you get is that all of the environment besides your service stays the same — the same as your production or staging, not on your local machine. It has a minimal footprint; it doesn't require you to run multiple services.
E: I like this, because some of my development process includes actually running things on a remote cluster, not a local one, and I almost never run anything locally these days. So yes, this definitely helps me, I would say. Thank you for this.
F: I'm in the same position as you, actually: I'm running almost nothing on my local machine. Everything is on a cluster on Amazon. But the point is that it requires root access, both on your local machine and in the pod, to run. By the way, this is a limitation.
H: With the remote setup, the process is actually running remotely, right, but you're forwarding the debug ports of the process to the local debug client? (Yes.) So let me ask — you're debugging Go, so the delve executable, which is the debugger: is it actually in the container itself as well, or is it — (No, no. None of the tools that I'm —)
A: I noticed that you shared your VS Code configuration file in the documentation, and I keep debating: does it make sense to commit it, so people can just use it instead of copy-pasting from the documentation? I know a lot of developers tend to have IDE-specific setups.
F: I know that it will probably work for VS Code, but for people using, let's say, IntelliJ, I'm not sure. They'd need to download some kind of plugin that allows using the .envrc — my .envrc, the default one, just sources this file. So this is the configuration for IntelliJ, and I can commit the launch.json, but for IntelliJ —
A: Documentation is better than nothing, because right now we kind of assume that people know how to configure their IntelliJ IDEA — their IDE — and we don't even have documentation. So I feel like I have some hidden knowledge which I hesitated to put into files, but I can put it into documentation and describe how I configure my IDE.
J: I have one quick question: I'm not sure I fully understood the swap-deployment step that I saw in the diagram. I'm just wondering what exactly is happening and why we need to do that.
F: The swap starts a pod with exactly the same spec, but a different image. The image that comes from Telepresence has the command to start forwarding all the traffic back to your local machine.
A: Okay, so I think it's my turn now. Let me share my screen. The next topic I wanted to talk about — someone set me up to do this: it was proposed that I give a short description of what was happening with the 1.7 release. I basically just have a list of the most notable issues that happened, plus some suggestions, and then we can brainstorm together, and if we like some of these suggestions, we can convert them into —
A: Okay, so I will post the document in Slack — and it's already available in the calendar invitation for this meeting. The first couple of issues are maybe boring, and then it gets more and more fun. So let me start with the simplest one. In 1.7 we accidentally broke Kubernetes 1.15 support. Basically, not everything was broken.
A: It's just one piece of functionality, but a pretty important one. As a quick summary, users were unable to create any new tokens for their projects. Not sure if you're familiar with it, but this is the most common way to integrate your CI and Argo CD together — these tokens open limited access to Argo CD. And the reason is very simple: right now we don't even have an official list of supported Kubernetes versions. I feel like this part is easy to fix.
A: We just need to agree on which versions we want to support. The second problem, I think, is a little more difficult to execute: we don't have automated tests for different Kubernetes versions. We have tests for token generation, but they just use Kubernetes 1.16. I believe GitHub supports matrix testing — I think this is how it's called — so the same set of tests can be run against different Kubernetes versions. So that's the proposal, and I'm going to convert it into a GitHub ticket, because it's hard to argue against. The only open question, which I think we should take offline, is which versions of Kubernetes we want to support.
E: A quick note there: even if we can't get to that answer — and it might be a difficult question, because there may be many enterprises running on old Kubernetes — the thing we should probably agree on is the range of Kubernetes versions we should run our tests on, to begin with, so that we at least know whether something is failing or not. Because sometimes you may say, hey, this is failing —
E
It's
a
super
simple
fix
to
ensure
it
works
there
or-
or
we
can
say
this
works
on
like
argo
city
works,
on
cube
1.15
except
the
following
things
like
those
are
things
we
could
get
to,
but
I
think
the
first
step
would
be:
let's
get
a
wide
range
of
cube,
like
cube
versions
tested
on
a
regular
basis
that
will
help
us
answer
the
next
question.
What
can
we
reasonably
support?
B: I think this is pretty much possible with GitHub Actions. They have something like matrix testing, and I think you could just take the e2e test step and use different Kubernetes versions, from k3s or something like that. Yeah.
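The matrix idea mentioned here could look roughly like the workflow fragment below. This is a hypothetical sketch, not the project's actual pipeline; the job names, k3s versions, and make target are illustrative:

```yaml
# Hypothetical GitHub Actions fragment: run the same e2e step
# against several Kubernetes versions via k3s.
jobs:
  e2e:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        k3s-version: [v1.15.12, v1.16.15, v1.17.12, v1.18.9]
    steps:
      - uses: actions/checkout@v2
      - name: Install k3s ${{ matrix.k3s-version }}
        run: |
          curl -sfL https://get.k3s.io | \
            INSTALL_K3S_VERSION=${{ matrix.k3s-version }}+k3s1 sh -
      - name: Run e2e tests
        run: make test-e2e
```

Each matrix entry runs as an independent job, so a breakage on one Kubernetes version shows up as a single red check rather than blocking the whole suite.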
A: That one was quick; the next one is maybe a little bit more difficult. In 1.7 we made a dangerous change: we refactored the logic responsible for Kubernetes resource reconciliation. The previous logic was what we called a "poor man's diff." The problem with the previous implementation was that it was not matching Kubernetes behavior 100%, so we made a switch, and as a result of it we dropped —
A: — a bunch of code which was handling edge cases, and we also dropped hacks related to the creationTimestamp field, and that was basically a mistake. Apparently the creationTimestamp handling was there not just because it was an edge case.
A: It was an edge case, but it covered a common situation: a lot of users intentionally put the creationTimestamp field into the spec in Git, with the value set to null, and it immediately changes as soon as you kubectl apply that manifest. So the logic was technically correct —
A: — it was reporting a difference there, but the user's intention is, of course, to not see a difference. So we dropped that special handling for creationTimestamp, and then a lot of users complained about it. When I tried to investigate what was happening, I found that a lot of popular Helm charts —
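The special handling being discussed can be illustrated with a tiny stand-in (Argo CD's real normalization is written in Go and covers many more fields; this Python sketch only shows the creationTimestamp case):

```python
import copy

def ignore_null_creation_timestamp(git, live):
    """If the Git manifest pins metadata.creationTimestamp to null,
    drop the field from both sides before diffing.
    Illustrative sketch only, not Argo CD's actual code."""
    git, live = copy.deepcopy(git), copy.deepcopy(live)
    gmeta = git.setdefault("metadata", {})
    lmeta = live.setdefault("metadata", {})
    if "creationTimestamp" in gmeta and gmeta["creationTimestamp"] is None:
        gmeta.pop("creationTimestamp")
        lmeta.pop("creationTimestamp", None)
    return git, live

# Manifest as it sits in Git (many generators emit the null field)...
in_git = {"metadata": {"name": "demo", "creationTimestamp": None}}
# ...and the live object after apply, where the API server filled it in.
live = {"metadata": {"name": "demo", "creationTimestamp": "2020-09-10T00:00:00Z"}}

assert in_git != live                       # naive diff: reported out-of-sync
g, l = ignore_null_creation_timestamp(in_git, live)
assert g == l                               # with the special handling: in sync
```

Dropping this normalization is exactly what made previously-synced applications suddenly report a permanent diff.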
A: — have, you know, this pattern. That's why the proposal is to have some automation — just a rather basic test which simply installs a set of Helm charts and makes sure that, after the installation, the application based on each Helm chart is in sync. So this is basically what I'm saying: we should simply ensure that Argo CD is able to install popular Helm charts, like cert-manager or Istio — though maybe Istio is too much.
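The smoke test proposed here could be as small as declaring an Application for a well-known chart and asserting it reaches a Synced/Healthy state. The manifest below is a hypothetical sketch; the chart repo URL and version are illustrative:

```yaml
# Hypothetical smoke-test Application for a popular chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: smoke-cert-manager
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io   # cert-manager's public chart repo
    chart: cert-manager
    targetRevision: v1.0.1                # illustrative pin
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated: {}
# The test would then poll until status.sync.status == "Synced" and
# status.health.status == "Healthy", failing on timeout.
```

A regression like the creationTimestamp one would surface as the application never converging to Synced.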
E: Yeah, I think that makes sense. I think we need to do a couple of things. One is that — very often what happens is that the PR which makes the code change to, let's say, drop the special handling of creationTimestamp —
E: — goes in fine. So it looks like we need some kind of acceptance-test suite. Now we have to figure out what that would imply in the context of our project, so that when somebody changes something like that, we know they're effectively changing behavior that can impact users' expectations. So yes, we need some kind of acceptance-testing framework, which is not core unit tests, and —
E: Yes, I completely agree with you. We should have some real-world Helm charts, complex enough and popular enough, that we add to our test bed. There's no harm in doing that.
A: You mentioned unit tests — we did have a unit test for this special field, and I happily got rid of it because I thought, oh, it's not needed anymore. So yeah, this sounds reasonable. I will create a proposal to do that, and in that issue we can figure out a list of Helm charts and get working. Okay.
A: The next issue is related. Again, we made a change: we started using pretty much the exact same diffing logic as Kubernetes itself, and unfortunately Kubernetes — 1.18.4, I think — had a bug, so kubectl could just hang and not execute the diff at all if it tried to compare a very big JSON document. And such a document could be a CRD manifest, like one of cert-manager's CRDs. The good thing is that —
A: — it was caught even before the release was created, so we fixed it. But the bad thing is that the fix was not really tested well. Basically we had two PRs: one PR was merged before the release was created, and the second PR was merged after the release was created.
A: With just unit testing, unless you know that this bug might exist, you can't write a test for it. The bad thing is that the initial fix was not tested properly, and it leaked into the release. I see two improvements here. First is to test better — and basically, the more contributors —
A: — we have, the more people who can potentially test a release — just a second pair of eyes. And I guess the previous proposal would help as well: if we simply had a test which keeps verifying popular Helm charts, we would catch that error during development. If we continuously tried to deploy the most recent version of cert-manager, we would know about the problem way sooner, even before the release was ready, right?
A: About the possibility of creating tests after we know the problem exists: it's extremely easy — you just need a simple test which tries to compare two files. So that's the explanation of why we had no such tests: we never thought such tests might even be needed.
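For a hang like the one described, the regression test needs a deadline, so an accidental infinite loop fails fast instead of blocking CI forever. A pure-Python stand-in (the real tests are Go; the `diff` function here is a trivial placeholder):

```python
import multiprocessing

def diff(a, b):
    # Placeholder for the real diff over large manifests.
    return a == b

def diff_with_deadline(a, b, timeout_sec=10.0):
    """Run diff in a child process; treat exceeding the deadline as a
    regression (a hang), not just a slow test. Illustrative sketch."""
    def worker(q):
        q.put(diff(a, b))
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    p.join(timeout_sec)
    if p.is_alive():
        p.terminate()
        raise TimeoutError("diff hung: possible regression")
    return q.get()

# Two "large CRD manifests" (illustrative stand-ins):
big_a = {"spec": {"versions": list(range(100_000))}}
big_b = {"spec": {"versions": list(range(100_000))}}
assert diff_with_deadline(big_a, big_b) is True
```

Once a bug class like this is known, a single fixture with the offending manifest pair pins it down permanently.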
E: A couple of things, then. We need to have those popular charts that typically hit this, and then use that learning to maybe create a test chart ourselves — a model of all the complexities together that we know we can maintain. That will gradually help us grow a test candidate chart that would eventually get used.
G: A question: do you think it's appropriate to have a more comprehensive test suite that is not run as part of every pull request or commit, but run regularly, like on a cron schedule? (I was thinking the same.)
A: I feel like even if we have the test, it can potentially be time-consuming, because installing cert-manager takes, I guess, minutes — it has sync hooks, basically jobs which try to do something with the CRDs. But if we ran it at least every day, or even every week, that would already be helpful.
G: That's one strategy we're going to take with Rollouts, because we need more integration-type testing with, for example, ALB ingress and Istio, and it's basically not really possible or practical to run those as part of the per-commit pipeline.
E: So there are some things we do, for example, in some of the other Red Hat-run open source projects.
E: We do have some infrastructure set up to actually run tests against OpenShift across multiple versions. So, in short, yes: I know that this is expensive and needs to run on a less frequent basis. But then we need to be aware that we can still end up in an awkward position where five or six PRs went in, the nightly job fails, and we're trying to figure out which change broke —
E: — it. So as a first step, I would say: let's have a well-written test suite for this, let's try to run it as frequently as possible, and if it gets out of hand, we reduce the frequency.
G: Right, yeah, there's the technical challenge and also the resource challenge. I guess the technical challenge is that I don't know whether, with at least the infrastructure we have right now, it would be technically possible to actually test cert-manager or Istio or those things against a small k3s. Given the infrastructure, I would definitely agree: half an hour is nothing compared to the developer time spent chasing this after the fact.
A: Yeah, okay. The short summary is that we just don't have enough functional tests. All right — the next issues are all related to performance, and they all kind of have the same suggestion, and there's a small pre-story to it. There was no bug, but at some point at Intuit we reached scalability limits where one Argo CD instance just could not keep up, and it was not even related to the controller.
A: It was related to the parts that are supposed to be very simple: the UI and the API. And what we learned is that apparently Kubernetes doesn't really like it when you write into etcd very frequently, or when you have too many open connections — etcd, after all, is a very basic store. The improvements we made were extremely simple. We moved some fields which were changing very frequently into Redis, because Redis can handle that better — that was one improvement. And second, the API server in 1.7 no longer reads data directly from Kubernetes.
A: It reads from the informer as much as possible; I think only in maybe two APIs does it intentionally read data from Kubernetes directly, to ensure it gets fresh, not stale, data. Everything else comes from the informer. And then we tested things internally, as we usually do — I think we had a presentation about it, but basically what we do at Intuit is that the team at Intuit —
A: — the team that develops Argo CD, Argo Workflows and Rollouts — uses Argo CD to deploy our Workflows, Rollouts and Events into multiple clusters. And after the 1.7 release, the first complaint we got was from users — and basically we noticed it ourselves too.
A: So what happened is: every time a user opened the application details page in the Argo CD UI, the UI would open two persistent connections to Argo CD — and browsers have a limitation: basically, every browser won't let you open more than six connections to a single domain. We used to have such connections before, but a user only opened one connection per page; we increased it to two connections. Plus, after switching to the informer, the watch API —
A: — became very stable, and the connections never close, because with informers you can just read data indefinitely. As a result, a user opened two persistent connections which never close, and after opening three tabs, the fourth tab just wouldn't load at all.
A: Apparently it's a well-known problem if you use long polling or the server-sent events protocol, and one of the suggested mitigations is to close the connection when the user clicks away and moves to a different page. Browsers support the visibility API, and it's relatively easy to just open and close these connections as you navigate onto and off the page. That's what we did: we made this improvement, released it, tested everything, everything was working fine — and then we moved to the next problem.
A: After basically five days of investigation, we realized that we had a bug, again introduced in 1.7: while reading data from the informer, there was a possibility of deadlock. The bug was in production for quite a while, but it was extremely difficult to reproduce. After we improved the UI, we created the perfect conditions to reproduce it: the deadlock could happen if a user opens and closes a connection very quickly and frequently, which is exactly what happens when users switch between pages.
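The class of bug described — notifying subscribers while holding a lock that the subscribers' callbacks also need, e.g. to unsubscribe — can be sketched with a hypothetical stand-in. (The actual bug was in Argo CD's Go informer plumbing; this Python sketch only shows the safe pattern: snapshot under the lock, notify outside it.)

```python
import threading

class Broadcaster:
    """Safe pattern: copy the subscriber list under the lock, then call
    the callbacks with the lock released, so a callback that
    unsubscribes itself cannot deadlock. Illustrative sketch."""
    def __init__(self):
        self._lock = threading.Lock()   # non-reentrant, like Go's sync.Mutex
        self._subs = []

    def subscribe(self, cb):
        with self._lock:
            self._subs.append(cb)

    def unsubscribe(self, cb):
        with self._lock:
            self._subs.remove(cb)

    def publish(self, event):
        with self._lock:
            subs = list(self._subs)     # snapshot, then release the lock
        for cb in subs:                 # callbacks run lock-free and may
            cb(event)                   # safely unsubscribe here

b = Broadcaster()
seen = []
def once(ev):
    seen.append(ev)
    b.unsubscribe(once)   # would self-deadlock if publish() held the lock here

b.subscribe(once)
b.publish("app-updated")
b.publish("app-updated")  # subscriber already gone; nothing delivered
assert seen == ["app-updated"]
```

Holding the lock across the callback instead would hang the first time a client disconnected mid-notification — which is exactly the "open and close quickly" trigger described above.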
A: So it took five days to find the bug, and finally we fixed it. One more related issue is that, while fixing the bug, we created a security issue — the summary is here. And the summary of all these issues is that, it seems to me, we have almost no infrastructure to test scalability and performance. We have a pretty good set of e2e tests, and they catch functional errors, and the dog-food cluster which we use at Intuit —
A: — really lets us test controller performance well. But to test API and UI performance you need something which creates traffic, and we don't have such a thing. One proposal could be to develop automation, and it could be as simple as a smoke test: deploy a simple application using Selenium scripting — basically, using the UI, create an application, click the sync button, and make sure the elements render correctly — and also have a Jenkins pipeline which keeps syncing that application.
A: I call it significant traffic, but I don't want to scare you: even five concurrent fake users would be more than enough, because right now — I didn't add it to the doc, but we have a way to track how many users we have — it's up to maybe 10 people at the same time. I mean, developers don't use Argo CD all day; it's not Facebook, right?
A: Yeah, so that's another one. Basically, I feel like it's important because we need to touch performance again in 1.8, and we will do our best to test it, but it will be hard, and 1.8 can have yet another unexpected issue. Looking at this after we already know the reasons, I cannot say what could have been done better to prevent it.
G: It's kind of the classic issue: a lot of times, when you increase the performance of something, you end up putting stress and load on a place that has never seen it before, which then surfaces an underlying issue that's been there the whole time but was just never exposed. Those are very, very difficult to always predict.
G: You just kind of have to dog-food it, and — I mean, I don't know what else we could have done. Yeah.
E: End-to-end UI testing — a Selenium way of doing that, with a little more traffic — is not something that you necessarily have to do on every PR. As you were saying, it could effectively be a cron job that runs at a specific interval; we just have to ensure it runs frequently enough that we know the state well ahead of doing a release.
A: Yeah, okay. And I think that's a good segue into the next topic: the work we're doing for the next release. After fixing the 1.7 release bugs, we realized that in 1.7 we made a lot of changes we were planning to do — the performance improvements — plus a lot of contributions from the community that we never planned to work on. I say contributions because all the features which were contributed are useful —
A: — which is great. At the same time, I think it affected the quality of the release, because we had to test way more than we were planning to. The caveat was that we had two developers who could test, and we had a list of maybe 200 changes — just a spreadsheet — and the two of us went through each and every line item. So the proposal is to try to focus —
A: — on the planned list. You cannot expect a new contributor to work on this type of issue, and the list now has more variety. So the proposal is for new contributors to try to pick from the list first, because it makes sure we know how much time we'll spend on testing, since we've committed to delivering these issues. Yeah.
G: In short: if you're trying to pick up issues to look at, we'd like you to pick from this list first. And the theme for 1.8 is again scalability and stability.
A: Right now we just have two people committed to testing the next release. What we were trying to do is make sure that if we have five changes and all of them go into the same, let's say, project details page, it will be much easier to test, because you don't have to switch context — you just open the one page during manual testing.
G: I think a lot of times we get features and then neglect to document the new feature. That should actually be a checkbox that PRs need to tick if they're introducing new features.
E: I think this sounds great, Alex. One thing I'd like to do — my general ask would be for folks who've been doing this for a while and who know what they're doing (since there are also folks who are still figuring things out): go and pick up the issues you want to work on and say that these are the ones you've called dibs on. At least then —
E: — I want some folks here at Red Hat to also go through this list of issues and put dibs on the ones they'd want to work on, just so that somebody here doesn't grab something which, Alex, you were planning to work on.
G: Oh yeah, we do assign — we use the assignee field. Though if you're not a member, you probably can't assign yourself. But if you —
E: Awesome. So I just wanted to check the ones we have here. One thing we could do — and this is basically an async planning meeting now, effectively — is go through the list of issues and say, hey, this is what I want to work on, and ensure that we follow up on that.
E: I'd actually prefer to be able to assign an issue to myself. (I think you should be able to; maybe we should try it. You can, yeah.) Okay — I haven't tried lately, but show me. We should try.
E: That's fine. Awesome — thank you, Alex, for the post-mortem and the introduction to what's going on in 1.8. Thank you, Jan.