From YouTube: 20200416 SIG Arch Code Org
A
Okay, so James, I know you added some; you were talking about some of the dependencies that were added with klog v2, that's on Slack. I'm not sure if you want to add that to the agenda, but I can talk about something that was discussed in our last call around staging the e2e testing framework. So I followed up with George Aaron from SIG Testing, and George took the initiative to put together a draft KEP.
B
Yeah, absolutely. So, as Andrew mentioned, this is the first draft of what I'm thinking the work is that's needed to eventually move the e2e test framework and its packages into staging, more than anything to improve the user experience and maintainability of that area of Kubernetes. And there's at least one particular thing that I feel I should mention right from the beginning about that KEP.
C
Both, both online. So there's big progress on the doc: the KEP got a review, and Matt reworded some of the things, so hopefully it should get merged, if not this week then next week, and the PR really looks good as well. There are some follow-up items from the PR in terms of testing, and how we could make sure that we don't break the functionality that we are adding there, and things like that; we'll be able to work through those in subsequent PRs. So at least the first PR is ready to be merged.
C
So that's the update on that. Now, the harder problem is the next one, which is structured logging. Structured logging needed some new methods to be added in klog, so we added them in the v2 API, because we didn't want to pollute v1, and also we had already cut over to v2 because of some additional, different method signature changes that we had from before, for supporting the logr pluggable logging stuff. So where we are is: we have a PR that we are iterating on.
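For readers following along, the shape of the structured-logging methods being discussed, a message plus alternating key/value pairs as in klog v2's InfoS, can be sketched with a stdlib-only stand-in. Note that `formatS` here is a hypothetical helper for illustration, not the actual klog implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// formatS renders a message plus alternating key/value pairs, roughly
// in the style of structured-logging APIs such as klog v2's InfoS.
// This is an illustrative stand-in, not the real klog code.
func formatS(msg string, kvs ...interface{}) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%q", msg)
	for i := 0; i+1 < len(kvs); i += 2 {
		fmt.Fprintf(&b, " %v=%q", kvs[i], fmt.Sprintf("%v", kvs[i+1]))
	}
	return b.String()
}

func main() {
	// Call shape mirrors klog.InfoS("msg", "key", value, ...).
	fmt.Println(formatS("Pod status updated", "pod", "kube-dns", "status", "ready"))
}
```

The point of the new method signatures is exactly this: keys and values travel as structured arguments rather than being interpolated into a format string.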
C
There we are; can you click on the 'fixes', yes, 'fixes issue', yeah, that one. So we were looking at which Kubernetes dependencies need to be updated to klog v2, because, you know, we can't update k/k to klog v2 and leave the vendored libraries at klog v1. So we went through that exercise, and there were five, six repositories that we needed to change, and all those changes are in: gengo, utils, kube-openapi, cAdvisor, the cloud provider one, and yeah, the network proxy one.
C
So we walked through all those dependencies, and all of them have been updated to klog v2. For some of these we've never made any releases, including modules, so they are all SHAs at this point, and we're going to leave it that way for now. But the next step is to run scalability tests. If people remember, last time we had trouble with klog v2 with, what was it, something, I think it was [inaudible], yeah.
C
We need PR, sorry, reviews for each of the vendored dependencies from the different SIGs. I'm not too sure if I got all the assignments right, which dependency goes to which SIG, so if you have any tweaks there, let me know and I can go make those tweaks. But the more important thing is: we are again pulling in a bunch of new dependencies that we didn't have before. So if you look at the go.mod and go.sum files, you know, they get blown up again.
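One way to see what a single dependency pulls in, which is the kind of blow-up being described here, is to filter `go mod graph` output, where each line is a "parent child" edge. This is a hedged sketch: `directDeps` is a hypothetical helper, and the sample graph text is illustrative rather than taken from the actual PR.

```go
package main

import (
	"fmt"
	"strings"
)

// directDeps scans `go mod graph` style output ("parent child" per line)
// and returns the children recorded for modules matching the given
// module path prefix. Illustrative only, not tooling from the PR.
func directDeps(graph, module string) []string {
	var deps []string
	for _, line := range strings.Split(strings.TrimSpace(graph), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && strings.HasPrefix(fields[0], module) {
			deps = append(deps, fields[1])
		}
	}
	return deps
}

func main() {
	// Made-up sample edges; a real run would pipe in `go mod graph`.
	sample := `
k8s.io/kubernetes github.com/spf13/viper@v1.4.0
github.com/spf13/viper@v1.4.0 go.uber.org/atomic@v1.3.2
github.com/spf13/viper@v1.4.0 github.com/fsnotify/fsnotify@v1.4.7
`
	fmt.Println(directDeps(sample, "github.com/spf13/viper"))
}
```

Running the same filter against the real graph before and after the bump would show which transitive edges are new.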
C
It brings in a couple that we hadn't seen before. So, can you go back to the previous one; we'll come back to this one. So the Viper one brings in [inaudible] and atomic, and I think that was the one thing that was major, and the k8s cloud provider brings in the gax-go, so that's new versus what we had before.
C
So as a net, we are adding about a hundred thousand lines of code, I think; if you look at the diff between how many lines got added and how many lines got deleted, it's about a hundred thousand, I think. So that's where we are right now. At least the code, the PR itself, is clean; all the CI jobs are working except verify, which has a couple of failures, and I can fix that anytime.
D
But I thought, I thought we had said that the pattern of making changes in klog that cause scale problems that we don't have test coverage for, then cutting releases, then bringing klog versions into new Kubernetes, and then doing scale tests in Kubernetes, is not workable. I thought we had said that we needed to have tests in klog that would exercise the paths that regressed in the stressed environment. That's the problem.
D
Root-causing that and driving the issue back to klog seems like a blocker. Like, if we, if klog has made changes that we know caused regressions that haven't been root-caused, and that we don't have coverage for in klog, that seems really fundamental. Like, I don't know how I can continue developing klog with this unresolved.
D
If we can make this closer to a dependency-neutral change, that would be ideal. Yeah, if we can track down where those came from and at least understand it, so we know if we're increasing our exposure to things. Or if, like, some, sometimes if you just bump, like, golang.org/x or something like that, it can bring in a ton of changes; things like that aren't as concerning, though, right? Just try to track down whether there are new things we're depending on, or whether the existing things just increased in size.
D
So, so there's new things that we're not vendoring, so the stuff in go.sum; then there's new things that we are vendoring, so that would be new licenses; and then there's the increased size of existing things. So just breaking down the size so we understand where that's coming from, that would be helpful. Yeah.
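The breakdown being asked for here, separating genuinely new modules in go.sum from version bumps of existing ones, could be sketched as follows. `modules` and `newModules` are hypothetical helpers, and the sample go.sum lines are illustrative, not from the actual PR.

```go
package main

import (
	"fmt"
	"strings"
)

// modules extracts the set of module paths (ignoring versions) from
// go.sum-style content, where each line is "module version hash".
func modules(gosum string) map[string]bool {
	set := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(gosum), "\n") {
		if fields := strings.Fields(line); len(fields) > 0 {
			set[fields[0]] = true
		}
	}
	return set
}

// newModules reports which modules appear in after but not before:
// the "new things we're depending on", as opposed to mere version bumps.
func newModules(before, after string) []string {
	old, cur := modules(before), modules(after)
	var added []string
	for m := range cur {
		if !old[m] {
			added = append(added, m)
		}
	}
	return added
}

func main() {
	// Illustrative go.sum fragments, not the real diff from the PR.
	before := "k8s.io/klog v1.0.0 h1:x\n"
	after := "k8s.io/klog/v2 v2.0.0 h1:y\ngo.uber.org/atomic v1.3.2 h1:z\n"
	fmt.Println(newModules(before, after))
}
```

Running this over the go.sum from before and after the klog v2 bump would give exactly the new-versus-existing split suggested above.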