From YouTube: 20200423 SIG Arch Community Meeting
A: Really, we haven't decided yet. I think we're probably going to want a biweekly cadence, but everything else going on with the release and the discussion around it kind of tweaked that. But I think that if we want to make some changes, particularly around KEP validation, it sort of dovetails with some of the validation we want to do on KEPs anyway. So, you know, as well as the things we talked about for making sure everyone is comfortable. So we...
D: So what I wanted to show was a demo of a mechanism for sending warnings from the server to API clients. In this case it's specifically kubectl, but it could be any API client. This uses a mechanism that is proposed in the KEP that's linked in the agenda: Warning headers, as defined by the HTTP RFC. The particular warning that I have the server sending in this demo is about use of deprecated APIs. So I have here
D: a file that has a bunch of beta-version RBAC objects, which were deprecated in 1.17 (which you may or may not have known, depending on how closely you read the 30 pages of 1.17 release notes). Today, if you apply those, the server happily accepts them, and you may not even realize you're using a thing that has an end date associated with it. But with this warning mechanism, when I apply these, the server can send back warnings.
D: There's a hook that says: what do you want to do with warnings you get back from the server? So clients could do things like ignore them and silence them; they could log them; they could print them to standard error. In this case, I set one up for kubectl that
D: deduplicates all the warnings it gets for a given invocation and outputs them to standard error, similar to other warnings it can already output to standard error. Because this happens at the client level, this actually works for any kubectl command
D: that uses client-go. So if you do gets, or if you do annotate, it works; it even works if you do raw commands. So if you're giving kubectl an endpoint on the server to speak to, and it's not doing any parsing of the body or anything else, it will still handle Warning headers that come back from that endpoint. Because it hooks in at such a low level, it's actually very effective at catching all the different types of calls that you make to a server. So that's what this is demonstrating. Questions? Yeah.
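A minimal sketch of what that client-side hook can look like, assuming the WarningHandler interface and process-wide registration that the KEP proposed for k8s.io/client-go/rest (the deduplicating stderr handler below is illustrative, not kubectl's actual implementation):

```go
// Sketch of a process-wide warning handler: every Warning header parsed
// from any response is routed through HandleWarningHeader, regardless of
// which command or code path made the request.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/rest"
)

// dedupingStderrHandler prints each distinct warning once per invocation,
// similar in spirit to the kubectl behavior described above. A production
// handler would also guard the map with a mutex.
type dedupingStderrHandler struct {
	seen map[string]bool
}

func (h *dedupingStderrHandler) HandleWarningHeader(code int, agent string, text string) {
	// 299 is the generic, user-displayable warn-code; ignore empty text.
	if code != 299 || text == "" || h.seen[text] {
		return
	}
	h.seen[text] = true
	fmt.Fprintln(os.Stderr, "Warning:", text)
}

func main() {
	rest.SetDefaultWarningHandler(&dedupingStderrHandler{seen: map[string]bool{}})
	// ... build clients from a rest.Config as usual; any request they make,
	// including raw ones, feeds warnings into the handler above.
}
```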
F: First of all, can I say thank you for finally tackling this; I've wanted this since before 1.0. So, thank you. Is this implemented as sort of freeform strings, or is it structured? Like, is "this API group is deprecated" a formal thing, or is it just a conventional string?

D: That's just a string.
D: It's not like a typed thing; the RFC around Warning headers defines it as just a text string. So if I actually dump out the header, that's what it looks like coming across the wire. So there's a code.
D: There are a few RFC-defined codes. 299 is the sort of generic warning you can display to a user. You can optionally indicate who is generating it, and then it's a quoted string.
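As a sketch of what that looks like on the wire (the deprecation message below is made up for illustration): an RFC 7234 Warning header is warn-code, then warn-agent, then a quoted warn-text, with "-" as the agent when it is unknown.

```go
// Minimal sketch of a server emitting the generic 299 warn-code with an
// unknown ("-") warn-agent and a quoted warn-text.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		msg := "rbac.authorization.k8s.io/v1beta1 is deprecated, use rbac.authorization.k8s.io/v1"
		// %q produces the quoted-string form the RFC grammar requires, e.g.
		//   Warning: 299 - "rbac.authorization.k8s.io/v1beta1 is deprecated, ..."
		w.Header().Add("Warning", fmt.Sprintf("299 - %q", msg))
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe("localhost:8080", nil)
}
```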
F: Within our code base, are we structuring it, or are we just allowing strings to be added onto the request at various stages of the REST pipeline?
D: The mechanism would allow arbitrary strings, but in the places where I'm using it, it's generated in a central place, so at least the initial users would be consistently formatted.
D: One of the things that we talked about yesterday on the SIG API Machinery call was starting with a very narrow set of use cases, but eventually there are other places we would like to use this as well: things like field-level warnings, or things that we know are bad today but can't start rejecting for backwards-compatibility reasons, where we would really like to let people know: you're doing a bad thing.
D: Sort of bounding the way that you add warnings, so you have to go through sort of well-established, formatted paths rather than just throwing strings in. Oh yeah, that's really my concern, right. I wanted this from before 1.0 anyway, because, you know, we have historical baggage where we'd like to tell people: hey, you really should stop using this field; or: you've set these two fields in an incompatible way and I'm breaking the deadlock by choosing X. And we have, you know, more than a handful of such examples. So I...
D: This is what I've wanted forever: to be able to plumb those things back through. Yeah, so that was definitely in mind: finding the balance between letting us use it in different layers of the stack and avoiding just sort of a mess getting sent back out to the user. And so thinking through: what is the threshold for a warning? What deserves a warning?
D: What doesn't? What's the guidance around how that's structured? And then, at least for the things in-tree, having, you know, helpers, similar to our validation helpers: if you're giving a warning about a field, you should give the field path, you should give this, you should give that. Sort of putting rails around it. Exactly what I was thinking, right.
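For illustration, a sketch of the kind of helper being discussed (WarningForField is hypothetical, not an existing API); it leans on the same field.Path type the validation helpers use, so a field-level warning always carries a concrete field path:

```go
// Hypothetical helper putting rails around field-level warnings: callers
// must supply a *field.Path, mirroring how validation errors are built.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

// WarningForField is a sketch, not an existing Kubernetes API.
func WarningForField(path *field.Path, msg string) string {
	return fmt.Sprintf("%s: %s", path.String(), msg)
}

func main() {
	p := field.NewPath("spec").Child("template").Child("spec").Child("serviceAccount")
	// Prints: spec.template.spec.serviceAccount: deprecated, use serviceAccountName
	fmt.Println(WarningForField(p, "deprecated, use serviceAccountName"))
}
```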
D: We put all this machinery in place to make sure that validation errors are more or less consistent; not that everybody follows it, but mostly they do. And I wondered if, at the same time, it would make sense to return this same structure, but warnings instead of errors. I understand that's probably more than you are wanting to tackle at the moment. Yeah, because the warnings actually can get surfaced by clients other than our clients.
D: So the first step is a very narrow use, specifically around deprecated API usage. This connects to some of the other things that we're trying to do around limiting the lifetime of pre-release API versions, so improving the visibility of when you're using those is a big, important part of that. But if we can get the mechanism in place to allow the server to send warnings back to the client, then yes, eventually the goal would be to start making use of it at the field level and to allow extension mechanisms to contribute warnings as well.
D: But the short version is: in the places where we parse server responses, we are already extracting the Warning headers into the response object, so that work would be done for you if you are using client-go. And then you can determine what you want to do with warnings process-wide or per client.
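A short sketch of the per-client knob, again assuming the shape proposed for client-go: a rest.Config can carry its own handler, overriding the process-wide default for just that client (NoWarnings and WarningLogger are assumed helper handlers; the host is a placeholder).

```go
// Sketch: log warnings process-wide, but silence them for one client.
package main

import (
	"k8s.io/client-go/rest"
)

func main() {
	rest.SetDefaultWarningHandler(rest.WarningLogger{}) // process-wide default

	cfg := &rest.Config{Host: "https://127.0.0.1:6443"} // placeholder config
	cfg.WarningHandler = rest.NoWarnings{}              // this client only: ignore warnings
}
```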
F: When a write request is sent to the API server, it can be routed to admission webhooks, and today those admission webhooks can allow the request, or they can reject the request; and if they reject the request, they can return, you know, a status that gets sent back to the API user. So they can fail a request and control the status that gets sent back, and send a Forbidden error or an Invalid or something like that.
D: The goal is to maintain parity with the extension mechanisms that we are telling people to use. Just like we could make use of this warning mechanism in-process, for things like validation or admission, to continue allowing requests for compatibility reasons but surface information to users: you're probably making a mistake here; you probably want to change this.
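A hedged sketch of that webhook parity, assuming the Warnings field that was later added to admission/v1's AdmissionResponse: the webhook allows the request but still surfaces a warning to the caller (the message text is made up).

```go
// Sketch: an admission webhook response that keeps accepting a request for
// compatibility reasons while attaching a warning for the API user.
package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
)

func review(req *admissionv1.AdmissionRequest) *admissionv1.AdmissionResponse {
	return &admissionv1.AdmissionResponse{
		UID:     req.UID,
		Allowed: true, // do not reject...
		Warnings: []string{ // ...but tell the user to change this
			"metadata.annotations[example.com/legacy-setting]: deprecated, use spec fields instead",
		},
	}
}

func main() {
	resp := review(&admissionv1.AdmissionRequest{})
	fmt.Println(resp.Allowed, resp.Warnings)
}
```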
D: The mechanism is generic. As you could see when I shared my screen, it's just a Warning header, so the header is not specific to deprecations. The first use of it would be related to deprecations, but there's nothing on the client side that would be specific to deprecations; the client is just presenting warnings sent to it. So that gives us flexibility in the future to use it for field-validation-specific things, or other warnings that extension mechanisms want to contribute.
D: So if I'm speaking... the beta RBAC objects were deprecated in 1.17. If I'm making a request to a 1.16 server, for example, I wouldn't get warnings, because they hadn't been deprecated yet. But if I was then speaking to a 1.17, 1.18, or 1.19 cluster, at that point they've been deprecated, and so you would start getting the warning.
A: ...metadata in the API, in order to support the particular use case for this that John was talking about. And you mentioned, though, that kubectl by default, or at least discovery, works by picking the first API version, which isn't necessarily one that's not deprecated, even if a non-deprecated API version exists. Do you have any plan to address that, so we don't start getting all kinds of extraneous warnings out of kubectl commands?
D: The only deprecated API version we prefer today in kubectl is Ingress v1beta1, which was for backwards-compatibility reasons. For every other API, we put the non-deprecated version first in discovery, and that's how custom resource versions are ordered as well: GA ones are ordered before pre-release ones in discovery.
A: Okay, so it's not really an issue. The primary thing: one question was about the warnings coming out, and the other (this is off-topic, maybe, for the thing David's working on): do you have a metric for this? Because this makes deprecation discoverable for clients, but what about operators who aren't necessarily the ones using the client, and who can't look at what all the clients are doing?
D: Just to pick up on what John said before he goes: there is another piece in the proposal, which is the admin-facing side, specifically around deprecated API use, so that there are metrics associated with use of those. The warning thing is the kind of nice thing to demo, but at the same time, as I'm making those deprecated API requests, a counter metric would be ticking up, so that an admin managing the cluster could see it.
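As a sketch of that admin-facing side (the metric name and labels here are hypothetical, not necessarily what the proposal shipped): a counter vector that the request path increments whenever a deprecated API version is served.

```go
// Hypothetical sketch of a deprecated-API usage counter, using the
// Prometheus Go client; the name and labels are illustrative only.
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

var deprecatedAPIRequests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "apiserver_deprecated_api_requests_total", // made-up name
		Help: "Requests served for deprecated API group/versions.",
	},
	[]string{"group", "version", "resource"},
)

func init() {
	prometheus.MustRegister(deprecatedAPIRequests)
}

// recordDeprecatedRequest would be called from the request-handling path
// whenever the resolved group/version is marked deprecated.
func recordDeprecatedRequest(group, version, resource string) {
	deprecatedAPIRequests.WithLabelValues(group, version, resource).Inc()
}

func main() {
	recordDeprecatedRequest("rbac.authorization.k8s.io", "v1beta1", "clusterroles")
}
```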
E: All right, well, yes. So I can't remember exactly the warning text you showed for, like, RBAC v1beta1: it was something like "deprecated in 1.17, will be removed in such-and-such."
E: Yeah. Does it make sense for at least those types of warnings to include the version of the kube-apiserver that's sending them to you? Mostly in the context of: if you use the same API against two clusters that are just off by one version, and one of them gives you a warning and the other one doesn't, it's a little... if the user doesn't happen to know that one of them is slightly older, it's kind of strange, right, for the same action. I know, conceptually...
D: We'll get right on the future-telling module, so that we can announce the deprecation end dates before we've actually decided them. Okay, I don't know. I was trying to avoid super long, verbose messages, especially since, as you saw when I applied a manifest that had several deprecated things in it, you get a warning about each distinct deprecated API you're using, and so the current version of the server you're speaking to is repetitive information.
D: Hey everyone. Anybody who's been paying attention to the ongoing dual-stack work probably noticed that there was a fair amount of thrashing as we figured out some of the defaulting and semantics around choosing IP families, and what happens with updates, across upgrades, and those sorts of things. It turns out it's actually fairly complicated, and Cal and I have been going back and forth for quite a long time now on what could work, what does work, and what we need to do.
D: For those of you who are sort of API wonks and want to look at some of the nuances around the complexities of upgrades, and, you know, what-if situations, it's a hell of a read. We're also trying to take a step back: we've created a new doc which we are trying to get some consensus on. Like: actually, what do people expect out of this feature?
D: Because I think maybe we don't have full alignment across the community about what the expectations of a dual-stack cluster would be, in the face of upgrades in particular, but also just in sort of the default operation mode. So I thought, since we're doing SIG Arch today, I would put out a call for help, for people who like to think through these sorts of API puzzles, and people who are familiar with dual-stack, to come in and comment on the doc, and just, simply, add your opinions.
D: ...in the doc. Thanks. One thing I wanted to bring up, too, is that I've seen some questions about backporting fixes that are specific to dual-stack, and I think this is probably the alpha feature I've seen enabled and used the most recently. It would probably help to be clearer about whether we plan to fix, and backport fixes for, this feature, just to set expectations for people using it currently. Yeah, I mean, generally the policy is no backports for alphas, right? Yes, but people are not clear about that.
D: Apparently they enable it, and then something fundamental in their environment doesn't work, and it's clear that they expect or desire fixes, and backports of fixes. So it would help if there were a sort of central place that's tracking this, pointing them to the KEP and saying: this is when we plan to go beta, or when we're targeting beta; this is the point when we think it's reasonable to start using it, and when we would consider backporting fixes.
D: We have the enhancement, right. This is sort of the tug-of-war that we have with alpha: we want, we need, people to use it, or we will never find some of these issues; but at the same time, backports are non-zero risk, and are we really willing to take that for alphas? Mostly we fall on the "no, we're not willing to take that risk for alphas" side. I don't really see any fundamental reason to change that. I agree. I think centralizing that statement would be helpful, right.
D: Tim, no one reads docs; changelogs have alpha notifications; no one looks at APIs; no one looks at CRDs that say v1alpha1 and thinks anything other than: cool, I'm gonna use this in production. So I think we've demonstrated sufficient evidence that people aren't sufficiently scared of the things that are not ready for production use in our ecosystem. So how do we overcome that? I mean, other than something like shutting down every 10 minutes, I'm not sure we can do anything, and then people would probably just write a restart...
D: ...stance. I don't see how we... I don't think I see anything that makes me change how I think about alpha, which is: the pretty strong statements we've had for the last five years or so, which we've tightened and made stronger, are still backed up by the evidence. But I think the increasing truth is that...
D: ...we put alpha/beta-ness in people's faces, in, you know, the pre-release annotations, having to spell out the word "alpha". Do we need to go further with the feature gates? You need to turn on a feature gate to use dual-stack. Should the feature be called "scary, scary, scary, don't use this if you care about your cluster, enable IPv6"? I don't know. Probably warnings. I was just gonna say, my diabolical next step is to make a cluster that has all the things enabled return a warning that says so, always.
A: I think the warnings... but we also have talked about this thing, right, Tim, where in kubectl you'd have to actually put the word "alpha" in your command line, you know, when you want to use an alpha thing. That doesn't cover other clients, but at least people mucking around there would have to... but...
D: It would be perfectly fine if people weren't trying the IPv6 stuff and actively using it. What is happening here is people are running with IPv6 and then going and harassing people like Dims and myself and Jordan, and I'm sure everybody here has been asked: hey, when's that IPv6 thing coming? So it does create the kind of user-driven feedback we want; it's just that we got it wrong and we're going to have to fix it, potentially. That doesn't always follow from alpha. I guess the two...
D: The two things I'm looking for, specifically: the first is just a clear place that is calling out, these are the issues that block getting to beta, and these are the issues that we think block getting to GA. We've used project boards for that in the past, we've used label queries; there are a lot of different ways to do it. But a really concrete place that people in the project can go to, to answer: why aren't I enabling this alpha thing? Are there any known issues? So you can sort of find them...
D: I'll relay that to Cal and Lucky; I think it's a fine idea. I don't... I mean, we don't really have a precedent, so we can make stuff up. I don't know; I find project boards to be a little bit tedious, but we don't really have anything better, short of creating purposeful labels. You know, we used to have a lot of "area-whatever" labels or "effort-whatever" labels; I don't think we really want to do that anymore. Tim, no? Good.
D: The last part of the conversation, which I just posted in the Slack separately, was: people are confused about when, or if, this is going beta, partially because the enhancements issue is stale. So we should make an effort to update any documentation around dual-stack to let people know it is not done; there are lots of good reasons why it's not beta, and if you use it, you should expect something to catch fire.
A: All right, thanks, Tim and Lucky, everybody. And that's everything on that. Then we'll move on to some project readouts: we have the code organization update, and I didn't recruit anybody for the API review one, did I? So if anybody on here wants to do that, and we have time, then, you know, volunteer yourself. But I'll turn it over. You didn't.
B: So we found out that some of these dependencies created more new dependencies when updating to newer versions. We found this during the klog v2 migration PR, so we went back and fixed the majority of these vendored things. Examples are golint and other tools, so we moved them into a separate directory with its own go.mod and go.sum. As a result...
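A common way to get that isolation (a sketch of the pattern, not necessarily the exact layout the PR used): the tools live in their own directory with their own go.mod/go.sum, and a build-tagged file pins their versions without adding them to the main module's dependency graph.

```go
//go:build tools
// +build tools

// hack/tools/tools.go (illustrative path): blank imports pin build-time
// tools in this separate module's go.mod/go.sum; the build tag keeps the
// file out of normal builds.
package tools

import (
	_ "golang.org/x/lint/golint" // linting tool, imported only to version it
)
```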
B: What happened was that some of the licenses in the root licenses file also went away, because we removed the references to those dependencies in the vendor directory; so the licenses in the root directory went away too. But now we have a separate licenses file for the tools that we use, so we have two different licenses files. That's one of the outcomes of this PR.
D: Can I mention, real quick, the licenses: we spent a bunch of time breaking the licenses up into a bunch of individual files so they'd be far easier to code review, and then that got rolled back for reasons. I have it on my to-do list to roll back the rollback once I understand what the reasons were. So, just FYI, I'd like to get rid of the big omnibus licenses file. Yeah.
B: I remember that one, okay. Yes, we should do that, and when we do, we'll make sure that we put all the licenses in the same... I guess it doesn't matter there. It's just that what was happening was, when we moved the dependencies to a separate go.mod file, we lost the information about when new dependencies were getting added: if we didn't generate a new licenses file, then, you know, we would forget to validate the new licenses. So we ended up creating a new file there.
D: A concrete benefit of this isolation: the thing that we ran into the most problems with was the tooling around generating Bazel build files, which brought in a lot of other tools from the testing and repo-infra stuff, including linting stuff, which brought in things like SQL drivers. It was kind of crazy, and it turns out we don't actually need any of that at build time. So isolating it let us decouple that whole crazy dependency tree from the things that the Kubernetes build itself actually depends on, which is great.
B: Really happy with how it turned out. So the next one is the klog v2 migration. This is basically driven by the team that needed structured logging, the structured logging KEP, so to support them we started down this path where we updated all the dependencies we use that currently use klog v1 and migrated them to klog v2. That part is done. Now we have a PR in the main kubernetes/kubernetes repository moving things to v2, and I keep updating it every other day, and it's fine.
B: It looks fine: the old klog reference is gone, the new one is added, and, you know, every other day something or other sneaks in, so I have to keep it updated; I have a script, so it should be fine. The harder part was that earlier we used to have issues when updating klog, especially scalability issues, so we did get a sign-off from the scalability team: they ran with this PR and said they didn't find any regressions in terms of scalability.
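For reference, a minimal sketch of what the v1-to-v2 move looks like at a call site (the messages are illustrative): the import path gains /v2, and v2 adds the structured variants, such as InfoS, that the structured-logging effort needed.

```go
// Sketch of klog v2 usage: the classic v1-style calls still work, and
// structured calls become available.
package main

import (
	"flag"

	"k8s.io/klog/v2" // v1 was imported as k8s.io/klog
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, etc.
	flag.Parse()
	defer klog.Flush()

	klog.Info("plain message, unchanged from v1")
	klog.InfoS("structured message", "controller", "example", "retries", 3)
}
```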
B: I need to find people who can review and approve for specific SIGs, and then I have to ping Jordan or Clayton or one of the root approvers, or Tim, to push the whole thing through. Probably we have to find a good time to do this work; that's the problem. You have to find a good boundary, like a Friday afternoon or something like that, where we can get a clean run and push it in, so it won't affect anybody too much.
B: So let me put it another way: we are also looking at what the compatibility story is there. Can klog v1 and v2 be used in somebody else's project at the same time? If they pull Kubernetes in, they'll be using v2, but they might have v1 coming from somewhere else. So what is the compatibility story there? I'm working on that problem right now; Jordan left a few comments on that PR in the kubernetes/klog repository.
B: So that's the update on klog. The third one, or the fourth one, was that we pinged a bunch of people about creating tags for their own repositories. Like hcsshim: they are working on a tag. Then the coreos/go-systemd and coreos/pkg packages: those folks responded and said they're going to cut tags at the SHAs we are currently using. You know, everybody, like continuity and runc, all of them use these two packages, and we are all pinning SHAs, and the coreos repositories themselves...
B: Nobody is working much on those repositories, so at some point we might have to think about helping them, or taking them over, or something like that. But for now I just requested a tag, and somebody responded from the CoreOS side, so that's good news on those two packages. The last one I had was kustomize. We talked to SIG CLI; there is a link on that line. What is happening is we are stuck on kustomize v2 in our dependency tree, and we are not able to update to newer versions of kustomize.
D: And I think that's a pretty good summary. The particular dependency problem is not new: it was a concern when it was added initially, and the tooling we used at that point actually didn't have visibility into the recursive dependencies. So it existed, we knew it existed, and it was raised as a concern, but it was mechanically possible to do. Go modules make our existing problems more visible.
D: That's how I like to frame it. And so now that kustomize has actually declared its dependencies, and we have tooling to avoid taking on recursive dependencies that we know cause problems, that visible problem is blocking updates. I don't have a lot else to say; I haven't kept up with what the plan is. I think Jeff left a comment recently that kustomize is working to actually drop its Kubernetes dependencies in the code path that we depend on, so I haven't really paid a lot of attention.
D: Those three aspects are the primary things that I'm seeing in the API review requests. And then the last item I added: this is just a follow-up from my discussion of the utils module a few weeks ago. We had an action item to add tests to check Go API compatibility; that's something we're actually trying to do better at on the repositories we want people to treat as libraries. So I linked to the PR where we added that presubmit. It uses an API diff tool that comes from the Go project.
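For context, a hedged sketch of what such a check does (invocation details may differ from the actual presubmit): golang.org/x/exp/cmd/apidiff compares two snapshots of a package's exported API and reports incompatible changes.

```go
// Sketch only: shell out to apidiff to compare exported-API snapshots taken
// from a base and a head checkout. The snapshot file names are placeholders,
// and the real presubmit's wiring may differ.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("apidiff", "base.export", "head.export").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apidiff exited with an error:", err)
	}
}
```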
B: Oh, one more thought, about the k8s.io/utils repo: we now build, or at least build but not test, on Windows as well, using GitHub Actions. We did break once, when we had updated something which works on Linux but not on Windows, so we went back and added an additional check to build it on windows-latest using GitHub Actions.