From YouTube: 20200326 SIG Arch Community Meeting
A: Okay, I guess to start out, I'll start with the KEP status from last time. It actually looks like we've got KEPs, if I recall, for each of these. I know Daniel sent something out, but I guess I'll just confirm with each person. I don't think Tim's... yeah, but, oh yes, so Jim, we still have this KEP, right? We're just waiting for review of it.
C: And then there are two long-lived beta APIs that are working through their graduation criteria, CronJob and PodDisruptionBudget. Those have been going on for a while, but the goal is to get movement on those so that we can finish them up and get them to v1, so feel free to jump in on those if you have interest or would like to help. And those are the main things planned for 1.19 that I am aware of at this point.
C: Sort of along with that, another process item is beta-free conformance runs. We got the job in place last release that is running conformance jobs with only GA APIs and the certificates API, and so, as part of moving certificates to v1, that will get locked down and will run GA-only conformance this release. All right, progress.
C: So cAdvisor has been a dependency magnet for a long time. It basically integrated with every storage thing in the world, and every container runtime in the world, and every cloud in the world. It was exciting, and we found a way to isolate those so that the cAdvisor binary can continue building with all of those and supporting all of those, without disrupting anyone who depends on it but only uses it as a library.
C: We have a way to trim down all the transitive things we pick up, so that landed last week, which is fantastic, and it gives us a really easy way, as we continue to trim things down in kubernetes, to just move them into the binary-only portion of the cAdvisor repo. So dims has already updated kubernetes to use that version of cAdvisor, and we dropped, like, the InfluxDB dependencies and all kinds of random things. So this is really good; this was one of the most entangled parts of our dependency tree.
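For context, the usual Go mechanism behind a "binary-only portion" like this is to have optional integrations register themselves from init(), and to keep the imports that trigger that registration only in the binary's main package, so library consumers never pull those dependencies into their module graph. A minimal, self-contained sketch of the pattern; all names here are illustrative, not cAdvisor's actual layout:

```go
// A sketch of the registration pattern: the library keeps a registry,
// optional integrations add themselves from init(), and only the
// binary compiles the optional packages in (via blank imports).
package main

import "fmt"

// backends lives in the library portion and has no heavy dependencies.
var backends = map[string]func() error{}

// register would normally be called from init() inside each optional
// backend package (e.g. an InfluxDB storage driver), which only the
// binary imports, via a blank import like `_ "example.com/influxdb"`.
func register(name string, start func() error) { backends[name] = start }

func init() {
	// Stand-in for an optional backend package's init().
	register("influxdb-stub", func() error { return nil })
}

func main() {
	for name := range backends {
		fmt.Println("registered backend:", name)
	}
}
```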
E: Yeah, definitely, and I have some follow-ups as well. There's a couple of PRs, one waiting and a couple more merged; I just read through the open PRs there. So one question I did have for this group here is: there is support for NVIDIA metrics in cAdvisor. Does anybody use it? Because I don't know if we need to pull that into the library, you know, into our library, or not.
E: And we are also using somebody's personal repository, mindprince's personal repository, for one of the dependencies there, so that was the next thing that I was going to go look at: what are the personal repositories that we are using, and are there any alternatives available? There's a bunch of those that we use as well, so...
E: Exactly, so I'll do that. So, while we were doing these changes, the other thing that was happening in parallel was the dockerless kubelet, and the idea here was: can we segment the usage of docker, docker and things that we pull from other moby repositories, into something that we can drop, just like the providerless tag that we have currently? So there is a KEP and a PR; it's turned out really well, especially with the cAdvisor changes.
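As described, this mirrors the existing providerless build tag: gate the docker-backed code behind a Go build tag, so a kubelet built with the tag drops the docker/moby import tree entirely. A minimal sketch of that build-tag pattern; the tag and names are illustrative, not taken from the KEP:

```go
// docker_unsupported.go — compiled only with: go build -tags dockerless
// +build dockerless

package dockershim

import "errors"

// NewDockerService is a stub under the tag, so none of the docker/moby
// packages end up in the binary's import graph. (Illustrative name; a
// paired file guarded by !dockerless would hold the real client code.)
func NewDockerService() (interface{}, error) {
	return nil, errors.New("this kubelet was built without docker support")
}
```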
E
We
can
definitely
you
know
the
scenario
like
kind
where
a
kind
doesn't
it
uses
continuity
and
it
does
not
need
bucker,
so
we
can
definitely
use
it
for
that
scenario.
The
second
scenario
that
we
have
is
a
cluster
API
cluster
API.
We
know
that
it
doesn't
use
a
doctor.
It
uses
continuity
at
this
point
as
default.
So
so
the
idea
would
be
to
get
to
the
point
where
we
merge
this.
E
We
get
it
working,
we
add
CI
jobs
for
CRI,
only
scenarios,
and
then
once
we
are
comfortable,
we
can
go
to
a
deprecation
mode
for
dr.
shim
that
will
be
in
a
subsequent
kept.
This
kept
just
focus
on
making
sure
that
we
can
add
a
tag.
We
can
do
a
build
of
cube
cubelet
without
docker
and
we
are
able
to
use
it
in
some
limited
scenarios.
E
So
we
are
looking
for
signal
folks
to
help
with
approval
and
review
off
the
cap
and
the
PR.
This
is
just
FYI
here.
For
you
all.
The
other
thing
that
came
out
of
this
sea
advisor
and
continuity
was
the
recursion
between
some
of
the
CRI
runtimes
and
cubelet
and
CRI
API.
Basically,
so,
if
you
take
continuity,
continuity
has
two
repositories:
continuity,
/
continuity
and
continuity,
/
CRI
so,
and
they
have
a
recursive
dependency
on
Cuba
notice
and
CRI
API.
E
The
same
is
true
with
the
Windows
folks
are
also
the
HCL
h,
CS
shrimp
as
a
recursive
dependency
on
things
in
in
our
repository.
So
the
best
way
to
cut
this
dependency
is
to
get
you
know
not
staged
CRI
API.
Now
this
might
get
entangled
with
the
previous
one,
where
we
were
talking
about
accumulated
of
danke
Schoen,
because
you
know
to
get
to
the
point
where
we
deprecated
Akash
M.
We
have
to
get
CRI
API
to
a
v1
level,
so
then
that
goes
into
the
other
discussions
about
okay.
What
is
missing
in
CRI
API?
E: The SPDY package from docker, the SPDY streaming package from docker... it seems we have an issue, which is five years old now, to drop SPDY, and the main consumer of that is kubectl. The question is: are there alternatives available to switch over to from SPDY? I've been talking to Mike Danese and looking through several PRs and issues raised in this area.
D: I see. Yes, kubectl uses it, and it probably doesn't need to use it as much, but the path between the API server and the kubelet uses SPDY, and that, I don't think, has a WebSocket replacement. So just removing it from kubernetes... or from kubectl... is not going to get it out of the dependency tree. Okay.
D: Yeah, on the repo... I just looked at it because I came across it in another context. There's this long-running bug that occasionally things disconnect; I'm pretty sure that there's some binary sequence that you can transmit over a SPDY channel that confuses the library and makes it disconnect, and I went looking at that dependency and they don't have any randomized testing. So I suspect that that may be true. Yeah.
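For what it's worth, the randomized testing being described here is the kind of thing go-fuzz (github.com/dvyukov/go-fuzz) is built for: throw arbitrary byte sequences at the frame decoder and let the fuzzer search for inputs that panic it or trip the spurious-disconnect path. A minimal sketch, with parseFrame as a hypothetical stand-in for the library's real decoder:

```go
// +build gofuzz

package spdystream

import "errors"

// parseFrame is a stand-in for the library's real frame decoder.
func parseFrame(data []byte) (int, error) {
	if len(data) < 8 {
		return 0, errors.New("short frame")
	}
	return int(data[0]), nil
}

// Fuzz is the go-fuzz entry point: go-fuzz mutates data, hunting for
// byte sequences the decoder panics on or silently mishandles (e.g.
// the kind that might cause a spurious disconnect).
func Fuzz(data []byte) int {
	if _, err := parseFrame(data); err != nil {
		return 0 // invalid input: deprioritize in the corpus
	}
	return 1 // parsed frame: keep exploring around this input
}
```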
E: It doesn't... it came out when I was raising this question with, like, Justin Cormack, saying, okay, what are we using, what can we drop? You know, he said: okay, this is one thing that has been bothering me for some time; can you guys do something about it? So that's why I brought it up here. And the other aspect to this WebSocket thing was: we already support WebSocket in the API server, for, like, the Python client, for usage from Python. So we already support WebSockets, yeah.
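For context on what "already support WebSocket" means here: the exec/attach streaming endpoints on the API server can negotiate a channel-multiplexing WebSocket subprotocol (v4.channel.k8s.io), in which the first byte of each binary message identifies the stream (0 stdin, 1 stdout, 2 stderr); that is what the Python client speaks. A rough sketch of such a client using gorilla/websocket, with the host, pod, and token as placeholders:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"

	"github.com/gorilla/websocket"
)

func main() {
	// Placeholders: substitute a real API server, namespace, pod, and token.
	url := "wss://127.0.0.1:6443/api/v1/namespaces/default/pods/mypod/exec" +
		"?command=ls&stdout=true&stderr=true"
	dialer := websocket.Dialer{
		// The channel-multiplexing subprotocol the API server speaks.
		Subprotocols:    []string{"v4.channel.k8s.io"},
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}
	header := http.Header{"Authorization": {"Bearer <token>"}}

	conn, _, err := dialer.Dial(url, header)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return // server closed the stream
		}
		if len(msg) > 0 {
			// First byte is the channel: 0 stdin, 1 stdout, 2 stderr.
			fmt.Printf("channel %d: %s", msg[0], msg[1:])
		}
	}
}
```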
E: ...console, that kind of scenario. So if we could switch over kubectl to WebSocket... but then, like Daniel said, we can't cut the cord between the API server and the kubelet, and until we switch that, we will still be using that library. So maybe it makes sense for us to take over that library, and, like, Daniel, when you want to start fixing stuff there, maybe we can have it in kubernetes-sigs and work on it. Yeah.
D: A fork there would make it easy to fix bugs; I'm not sure if that's easier or harder than replacing it entirely. If we were to try and replace it, since we have to work with kubelets that are at least two versions old relative to the API server, we wouldn't be able to... like, assuming we get it in this release, it would still be, like, two more releases before we could actually drop the dependency. Oh yeah.
E: So the background for the next topic is: SIG PM sent a message to steering asking to figure out how they could wind down their activities. There was nobody active in the SIG, and the only chair who is active is, you know, Stephen Augustus, and he gets to do all the work and has all the power, but there's nobody to help him, so he is asking to see...
E: ...if we could wind down the SIG. The main activity that they have been working on, or at least are supposed to be working on, is KEPs. So the thought here is... we talked about this in steering a little bit, and I'm bringing it to this group here because, SIG Architecture: can we do a subproject for KEPs, which will take care of things like publishing the KEPs as a searchable website...
E: ...you know, making sure that the data we have in the KEPs is up to date. We've been talking about the format of the KEPs, and what data goes into the KEPs, and making the KEPs progress, you know, on a regular basis, and the metadata in the KEPs. So, basically, everything around the KEPs except for the technical part of actually implementing the KEPs. There are some people who have been working on enhancements on the release side, so they would make good candidates to seed the subproject.
F: I think it is interesting to note that the KEPs actually were originally in SIG Architecture, so the move to SIG PM was in hopes that it would make it more administrative, but I think that we've seen in practice that the KEPs, for better or for worse, get into a lot of detail that actually does have relevance to the architectural roadmap. So, having lived through that, and also being one of the failed SIG PM chairs, I feel really guilty; you're like, I don't do anything, there's a bunch of lazybones except for Stu.
E: For the tooling, we already have a POC for a website that looks like, you know, the release notes one, but whether that is what we want is the question right now. But yeah, that's why... in SIG Release they do tooling as well, and even in ContribEx they do tooling as well, so it is technical in that sense. But the people who are manning this subproject are not going to be actually implementing the KEPs.
E: They are more like making sure that the KEPs are in good health, and that there is progress and people are making progress, and, you know, that there are people to talk to when they have issues progressing a KEP, or, you know, for suggestions on when to file a KEP, how to file a KEP, all those kinds of things as well. In addition to...
A: I think some of those things you just said are things that we have in the goals for that KEP group, but some of them, I think, are release team things, right: actually moving them through the process. I wouldn't necessarily want this approach... like you said before, I would want more definition of the process, and, I mean, I think we can do tooling around some of the process at some other time. I mean, it is going to be split between that and probably the release team.
E: Right. Also, there are traffic-cop duties, right? Like: go talk to this SIG, make sure that you get sign-off from that person, this person. That is half the effort, and people who are new to the process don't know who to talk to or who to get guidance from. Right, okay, so there is interest. So can you please add your name there if you're interested in this, so I know who to thank when we get this going.
C: This was something that we've sort of talked about a few different times here: developing in multiple repositories, sort of breaking things down into smaller components that are better tested and better maintained, and then reassembling them. And so the k8s.io/utils repo has accumulated a number of packages, most of which are fairly scoped, and...
C: That's been going on for the last couple of years. In the past release, a big chunk of storage mount utility code was moved there, which, on the one hand, was good, because it let all the CSI drivers make use of that without depending directly on the kubernetes monolith. That's positive. But there were some things that we'd seen in the past few months around testing of that repository that were concerning, and so I wanted to raise that here, just for awareness and to see if people had thoughts about how we can improve this.
C: That's the easy answer, but because this repository has a lot of different utility packages for a lot of different things that are unrelated to each other, reverting to a known good version to fix the storage regression would have reintroduced bugs that were fixed in, like, the networking utilities. And so we kind of have this shared fate, where a lot of unrelated things are in one package and we lack test coverage on some of those things, I mean.
G: I would suggest kind of a two-step approach. One is splitting the mount library out into its own repo, and then, once it's in its own repo, a few things: one is add a bunch of... a lot more testing. We need end-to-end tests that actually exercise the code path with the mount library and kubernetes end to end.
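On the testing point, the usual way to make a library like this unit-testable is to keep the mount operations behind a small interface with a fake implementation, so caller logic can be exercised without touching a real filesystem. A minimal sketch; the interface shape is illustrative rather than the exact k8s.io/utils API:

```go
// mount_test.go — a sketch of interface-plus-fake unit testing.
package mount

import "testing"

// Interface is the narrow surface callers depend on (illustrative).
type Interface interface {
	Mount(source, target, fstype string, options []string) error
}

// FakeMounter records calls instead of touching the host.
type FakeMounter struct{ Log []string }

var _ Interface = &FakeMounter{} // compile-time conformance check

func (f *FakeMounter) Mount(source, target, fstype string, options []string) error {
	f.Log = append(f.Log, source+" -> "+target)
	return nil
}

// TestMountIsRecorded exercises caller-visible behavior against the fake.
func TestMountIsRecorded(t *testing.T) {
	f := &FakeMounter{}
	if err := f.Mount("/dev/sdb", "/mnt/data", "ext4", nil); err != nil {
		t.Fatal(err)
	}
	if len(f.Log) != 1 || f.Log[0] != "/dev/sdb -> /mnt/data" {
		t.Fatalf("unexpected mount log: %v", f.Log)
	}
}
```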
G: And the other thing I was thinking of: so, split it off into its own repo, and then start doing releases. Right now, I think what we do is just a floating head, instead of having official releases cut for the utils lib, because there are so many different sub-packages that it doesn't really make sense to do releases. If we split off into our own repo, we could actually have official releases that we do each cycle, and have those picked up by kubernetes, and have something fixed to fall back on, that kind of thing. Yeah.
B: The package kubernetes/utils is pretending we still have a monorepo, and we don't. And, you know... Jordan, I know you saw some PRs this last week or two where the dependencies were ridiculous, and we shot them down for those reasons too. And I question whether the future of kubernetes/utils involves kubernetes/utils, or it involves 20 smaller repos that are more focused. Like, does it make sense to move to a storage-utils? I mean, many of these libraries are, like, 25-line libraries; they're really our utilities, so...
C: The three options are: to, like, put a notice there and then, a long time in the future, remove it; or you can sunset the package, like, say it's documented as deprecated, point to the place you extracted it into, and then freeze it and just say it never changes, and you just let it linger there, and it's ugly, but, you know; and then the third option is you, like, have a module which no...
C: ...fails at compile time? Oh, okay, that's basically the same as deleting. So the immediate action items that I see are to add a clear policy about what things should be in utils. I think the two things you talked about make sense: minimal-to-no non-standard-library dependencies, and must be fully testable with unit tests.
C: I have a tool... or, the Go team has a tool called apidiff, which you can run against a repo and it will, like, capture the Go signatures (roughly: you record the exported API of the old version, then diff the current package against it). I think for something like utils, where we actually want to maintain signature compatibility, we should set that up and run it so that we know if something is gonna break. Okay.
C: So, Saad, if you want to look into having a repo, one that is limited just to the mount library, that we could version and set up e2e tests for... I know Ben Elder, who runs the kind project, has a presubmit on the kind repo that will actually make sure that changes to it work with kubernetes master and different release branches, so he might be a good resource for figuring out how to do that kind of cross-repo... yeah.
C: Yeah, Tim, maybe I'll let you take a look at the networking, or delegate some of the networking stuff. I don't actually know how many of the networking utilities can be unit tested. It was more that I just saw sort of storage bugs and networking bugs getting interleaved, and not being able to revert one because it would reintroduce the other. So maybe it's not as much of an issue for the networking utils.
B: Sorry, sorry, let me throw one thing in; sorry, it's just more of a notification than anything else. The infra working group is planning next week to flip the k8s.gcr.io vanity name to point to a community-owned GCR instead of a Google-owned GCR. This has been months and months in the making, the promoter and all the other stuff that people have done there; big thanks to Linus in particular, but also everybody else who's helped review and prep for this. The plan is to do it next week.
B: Hopefully, nobody notices anything. There will be a temporary freeze on pushes into k8s.gcr.io while we make sure that any back-population is synchronized and that we're not racing against ourselves and causing chaos. We'll generate a full manifest of everything that's in the old repository, make sure that it is all in the new repository, and we'll flip the domain over. That domain flip takes a couple of days to roll through Google systems, and then, after that, we will release the freeze and, hopefully, nothing significant will have changed.