From YouTube: Kubernetes WG K8s Infra Bi-Weekly Meeting for 20200916
A
Hi everyone, my name is Bart Smikela and I will be your host for today's meeting. Let's start with some introductions: is there anybody who is new here and would like to introduce themselves to us?
A
Please add your name to the attendee list, and I think we should start with the billing review, as the last meeting was more than a month and a half ago.
C
Okay, I'm Mark Rossetti, and I'm here with Ernest Huang and Claudiu Belu. I'm one of the co-chairs for SIG Windows. Ernest works with me at Microsoft, and we work with Claudiu from Cloudbase. We're here to get more involved with this working group, specifically with some goals around potentially bringing some Azure resources into this group's purview and also making some of the Windows test passes easier to maintain with less manual intervention.
D
Hey, so yeah, I'm Ernest, I'm also from Microsoft, as Mark mentioned, and I'm looking forward to getting more involved in this working group.
E
A
Good to see you here too. Okay, so I think we can jump into the billing report. I will share my screen; I'm not sure if we want to cross-check it.
A
I don't know if, for example, Justin has some more data or not, but I'm not sure it's necessary at this point; the differences were not huge. The time frame I've put in is from the last call, which was July 22nd, and since then we've spent almost 200,000 dollars. Most of these costs look like a pattern to me: they come from Compute Engine, of course, and Cloud Storage, which is what we expect.
F
So I can speak to two things. The majority of this comes from the k8s-artifacts project, which is the project that serves k8s.gcr.io.
F
So the baseline you're seeing is weekly traffic of various people or CI systems pulling down artifacts from k8s.gcr.io; that's why it's got that pattern. What I'm not super aware of, I guess, is why there's so much compute coming from the prod project, a project that I believe to be solely about serving images, but I haven't looked too deeply into it. Oh, sorry.
B
I'm just going to say: if we go to the next page, I think, we see a GCE breakdown, and you'll see that GCE includes some bandwidth and some compute. I do think there is a question of why there is so much compute, but it may be that, broken down by project, it's okay. So yes, now you can drill down by project and get more insight, I think.
F
Okay, that would be good to know. But like I said, I haven't looked too deeply at it; I'm kind of going off the fact that neither Linus nor Tim freaked out when they saw what the costs were, but it would be worth digging into. The other thing, which is not necessarily reflected on this chart and is less clear until you filter out the artifacts project, is that over the last month and a half we've migrated most of the merge-blocking and release-blocking jobs for Kubernetes over to the community build cluster, so we've definitely seen some increased traffic.
F
The really big hump that you see in the last two or three weeks was migrating over the 100-node clusters that are stood up for every single pull request for the scalability jobs. So the last time Tim looked at it, it was ballpark a million a year for the artifact stuff, and CI at the moment is ballpark 500,000 a year, if I just took the last 14 days and averaged that out.
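A rough way to sanity-check that kind of ballpark figure is to annualize a short billing window. A minimal sketch of the back-of-the-envelope math being described, with an illustrative dollar figure rather than the actual billing data:

    # Back-of-the-envelope annualization of a short billing window.
    # The dollar amount below is illustrative, not taken from the meeting.
    last_14_days_spend = 19_000      # USD spent over the sample window (hypothetical)
    window_days = 14

    daily_rate = last_14_days_spend / window_days
    annualized = daily_rate * 365

    print(f"~${daily_rate:,.0f}/day -> ~${annualized:,.0f}/year")
    # With ~19k over 14 days this lands near the "ballpark 500,000 a year" figure.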
F
So that's where we're at with that. I'm still kind of wary of costs here because the jobs that have been moved over thus far are 200-something out of 1500-something, so there are many, many jobs that still run over in the google.com build cluster. We're probably going to need to spend some time evaluating which of those jobs are really necessary, and we may want to look at further ways of optimizing the cost of the existing jobs.
F
I know we have a number of jobs that hog as much CPU as they can. It's great that we have cluster autoscaling to deal with this, but could we pack things a little bit better somehow? So that's what I know of from a billing perspective.
A
G
A
Maybe I will give a few updates. There was not a lot going on from my side. I helped a little bit with moving one or two jobs, and maybe the bigger thing is that the DNS automation is successfully working right now using Prow, so whenever there is a DNS update it's automated. I was also hoping to work on the Slack infrastructure a little bit; we have the new tool deployed lately, and yeah.
A
F
So my goal is to first finish migrating over all of the release-blocking and merge-blocking jobs. The jobs that can't yet be migrated are jobs that write to GCS buckets that can't be written to by non-google.com service accounts. So we need to collaborate with release engineering to migrate from, I think, the kubernetes-release bucket to the k8s-release bucket, which is owned by k8s-infra.
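For context on what "can't be written to by non-google.com service accounts" means in practice, it is essentially a bucket IAM question: which members hold write-capable roles on the bucket. A minimal sketch using the google-cloud-storage client; the bucket name comes from the discussion above, and the script assumes credentials that are allowed to read the bucket's IAM policy:

    # Sketch: list which members hold write-ish roles on a release bucket,
    # to see whether a community (non-google.com) service account could publish to it.
    # Requires `pip install google-cloud-storage` and application-default credentials.
    from google.cloud import storage

    BUCKET = "kubernetes-release"  # bucket named in the discussion; swap in the k8s-infra-owned one to compare
    WRITE_ROLES = {"roles/storage.admin", "roles/storage.objectAdmin", "roles/storage.objectCreator"}

    client = storage.Client()
    policy = client.bucket(BUCKET).get_iam_policy(requested_policy_version=3)

    for binding in policy.bindings:
        if binding["role"] in WRITE_ROLES:
            print(binding["role"])
            for member in binding["members"]:
                print("   ", member)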
F
Unfortunately, I don't have a ton of time to even describe the work involved, and it would be really helpful if there was somebody who knew how to contact the release engineering team, or had a good rapport with them, because this is largely about release engineering artifacts.
A
I think I'm the person who has that kind of rapport, and I definitely have the will to help, but the problem is that for the next two weeks I will be on PTO without electronic devices; that's the plan with my wife. But after that I will be able and willing to help from that side.
F
Okay, that sounds healthy. I may free up some bandwidth during that time, but if not, that would be super appreciated.
F
Then I feel like the next step would be, well, there are, you know, 1300 other jobs to consider.
F
I think I would like to get the CI signal team more empowered to operate and maintain this stuff. One of the fun adventures I went through last week was upgrading the cluster from 1.14 to 1.15, and I'd like to upgrade it from 1.15 to 1.16 if I can, to get back within the community support window. It raises the question for me of who's in charge of upgrading clusters for the rest of our infrastructure.
F
I think we have them all set to auto-upgrade, but I personally haven't gone and checked them all. I just did the build cluster at a time and place of my choosing, so I could keep an eye on it.
F
So yeah, like I mentioned, it would be great for the CI signal folks to be able to create and save log searches, but I'd really like the pool of people who can upgrade the cluster or deploy new node pools or whatever to be more than just me. So there's that. Like I said, in terms of cost optimization, it would be cool to figure out what more we could do to bin-pack things more tightly.
F
I have a suspicion that our workloads are pretty I/O bound, so I have an open issue about ways we could increase the size of either our persistent disks or our VMs to get us more I/O, but thus far the tweaking and tuning that we've done hasn't had much effect. That's all I can think of for now.
F
With regard to Prow, sorry, to the build cluster, I feel like again the most value for us comes from getting the CI jobs running in community infrastructure. The node team, for example, couldn't figure out why node jobs were failing back in April, and they couldn't get access to the projects to see all the logs they needed to see what was happening. So I want to make sure we have that in place first, before migrating prow.k8s.io from one cluster to another.
F
I am not sure how to do that in a non-disruptive way; that's going to be a slightly heavier lift. It's something we could probably talk about with the Prow folks to see what they think and, if it's feasible, try doing something like that early in the release cycle, or after the release is out the door, or something.
A
Okay, yeah, at least I'll have some idea of where to poke after I'm back; sounds good. As I don't see any...
A
H
I can talk about it, since all those pull requests are authored by me. It's going to be a very long point, so I'll just get started. I've been working a lot on the Kubernetes e2e test images; I also created the agnhost image and consolidated a couple dozen images into it, and one of our goals was to have official Windows support for every e2e test image merged into Kubernetes, which, I think, is covered by the last two pull requests on that list.
H
I don't know if you know, but at this moment the image promoter/image builder currently builds the Kubernetes e2e test images, and for the Windows images it uses a couple of remote Windows Docker nodes in Azure, supported by Microsoft. Basically, the idea was to maybe just not do that, and that's something I've been working on for the past few weeks. I actually have a pull request for that, which is this one.
H
I'm pasting it in the chat right now. I think it was also suggested a few weeks ago that we should use community-owned infra for all the image building.
H
So basically, I've managed to build all the images with that pull request. The pull request switches the usage from docker build to docker buildx, which can also build Windows images, with some caveats I'll get into shortly. But just so you know, I've successfully built all the images with buildx, and I've also run a full conformance run with the Windows images and all the tests passed.
H
So that's the good news. The second piece of good news is that, with the current implementation that I've sent, we no longer have to rely on any Windows build nodes ever; that also includes any future Windows Server versions, because at the moment we basically use Windows Server 1909 nodes for building Windows 1909 container images.
H
Basically, you cannot build Windows images for a newer OS than the one you are on, but buildx eliminates that constraint entirely.
H
We can just use a Linux node to build those as well, but there are two important decisions that you will have to agree on.
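For anyone unfamiliar with the mechanism being described here: buildx can cross-build a Windows image from a Linux builder as long as the Windows stage only copies content in, since RUN steps cannot execute against a Windows base on a Linux host. A minimal sketch of that kind of invocation; the registry, image name, Dockerfile path, and base-image tags are illustrative assumptions, not the actual k8s-infra job configuration:

    # Sketch: cross-build Windows variants of a test image from a Linux builder
    # via `docker buildx build --platform windows/amd64`.
    import subprocess

    REGISTRY = "example.registry.io/e2e-test-images"  # hypothetical staging registry
    IMAGE = "busybox-helper"                          # hypothetical image name
    WINDOWS_BASES = ["1809", "1903", "1909"]          # Windows Server versions discussed above

    for osver in WINDOWS_BASES:
        tag = f"{REGISTRY}/{IMAGE}:v1.0-windows-{osver}"
        subprocess.run(
            [
                "docker", "buildx", "build",
                "--platform", "windows/amd64",
                # The Windows Dockerfile can only COPY content in (e.g. from a Linux
                # build stage or a pre-built cache image); no RUN steps on the Windows base.
                "--file", "Dockerfile_windows",
                "--build-arg", f"BASEIMAGE=mcr.microsoft.com/windows/nanoserver:{osver}",
                "--tag", tag,
                "--output=type=registry",  # push straight to the registry; no local Windows daemon needed
                ".",
            ],
            check=True,
        )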
H
The Dockerfiles for those images are also included in the pull request, if anyone wants to build them for any reason. Basically, we won't ever change those images; they're just going to be there, and the image promoter will just pull the couple of files and things it needs from there, more specifically the extracted busybox files and PowerShell with all its prerequisites. And if they ever change, of course, it's going to be a different version, so it won't affect any current builds.
H
That is the first thing the community will have to agree on: that the image promoter will pull some bits from those images, minor bits, but still. And the second one...
H
So it really helps a lot when building, and of course it will help buildx even more, especially because we have to build for multiple OS versions, currently 1809, 1903, 1909 and 2004. So basically, the image promoter will not have to pull eight gigabytes of images every job.
H
So that is one dependency we are going to need to merge first; I think that's pretty much non-controversial. We've also had a lot of test passes green for weeks or months on nanoserver using those images, so that's pretty good. But the thing about the nanoserver image is that it doesn't have all the requirements we need for some images; we actually need some DLLs which can be found in servercore. And the last proposition is to have one additional periodic job or some...
H
So those are the things we kind of need to agree on. All of this has already been included in the pull request.
H
I included the Dockerfiles in the pull request as well. If anyone ever needs to build those images, they can do so; of course, they're going to need Windows nodes for that, but they can build their own images if needed. Ideally, though, there will no longer be any need for that.
F
Okay, I definitely appreciate the thorough walkthrough of all this. It sounds reasonable.
H
F
I just have not had enough time to prepare for this and look into all the details, and I'd also like to get Ben's eyes on this, since I think he proposed buildx as the path forward, but it sounds reasonable. Just a point on terminology: you kept saying image promoter, but I think what we're talking about here is the image builder.
H
D
F
Yeah, the jobs that run in Google Cloud Build. Yeah, I think being able to have the Google Cloud Build job be fully self-contained, instead of calling out to external Windows Server nodes, sounds great, so this seems reasonable. I just can't give you a hard yes or hard no right now, but it's on the radar and we'll get to it as I have bandwidth.
H
Yeah, at the very least we could go forward with the non-controversial stuff, which is the nanoserver pull request, the second one in the list.
H
We basically decided that it's good, it works. We had full test passes with nanoserver-based images, and the buildx pull request depends on that pull request.
I
F
I
F
G
C
Well, we have folks here. I posted in the chat too, but I'll reiterate: please let us know if there's any clarification we can provide around all of this. We understand Windows Server containers work quite a bit differently than Linux ones, and a lot of people who primarily work with Linux containers may not be familiar with them, so one of the things we can help the most with is probably providing clarifications for reviewers around that.
F
Yeah, I appreciate that you're all here. I guess I'm just trying to say I'm not ramped up enough to be able to take advantage of your time right now, though I do really appreciate it. So as long as y'all are lurking in Slack, in the infra and testing channels or wherever, we'll reach out if we have questions.
F
Okay, I think your proposal for the cache image also sounds reasonable to me.
H
I mean, the idea of building the cache monthly is because Microsoft releases new container images every month or so, so it would be a good idea to always have fresh images with fresh DLLs and dependencies to use when building images, right?
F
I guess, help me understand why we need the complexity of the cache in the first place. I get that five gigs is large, but in theory most of this is going to be running in CI, where network is less of an issue, right? So what's the concern? Is it...
H
Basically, we are currently building for three different OS versions of Windows and we are planning to introduce a fourth one, because that is our support plan for the moment, as far as I know. That adds a lot of execution time to the image builder job and it most likely could end up in timeouts and stuff like that, while the cache is basically just two or three megabytes in size or something like that, so it doesn't affect...
H
...the build job at all in terms of execution time. Additionally, and this might just be me, but while I was working on this, it looked to me like docker buildx tended to always re-pull the Windows images over and over and over again.
F
So yeah, in general the idea of caching sounds great, and caching down to megabytes also sounds good. I'm just trying to apply the xkcd rule of thumb here. So I think the job we're concerned would be affected is the image building job, right, the one that builds the e2e test images, and it looks like it's ballpark 40 to 50 minutes right now. What kind of increase are you talking about?
H
From my experience, just pulling the image, on a node which is already in Azure, was taking something like 20 minutes, and that also includes decompression, and that's just for one Windows image. You have to consider three of them, one each for 1809, 1903 and 1909, and we're planning another one.
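To put that in perspective, a rough back-of-the-envelope on what those pull times add up to versus fetching a small cache image; the per-image figure and cache size are the ballpark numbers quoted in the discussion, nothing here is a measurement:

    # Rough comparison: pulling full Windows base images every build vs. a small cache image.
    pull_minutes_per_image = 20   # quoted pull + decompression time per Windows base image
    os_versions = 4               # 1809, 1903, 1909, plus the planned fourth

    without_cache = pull_minutes_per_image * os_versions
    cache_size_mb = 3             # quoted approximate size of the cache image

    print(f"Pulling bases every job: ~{without_cache} min of overhead")
    print(f"Cache image to fetch instead: ~{cache_size_mb} MB, effectively negligible")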
F
Okay, that makes sense. Yeah, caching sounds like it would be helpful there. I think we're just going to have to see what this does to image build times in general, and if we find that something becomes unreasonable, we'll have to figure out what we want to do about that.
F
But everything you proposed sounds reasonable to me. I think it's just a matter of getting Ben and maybe folks from the release engineering team to dig into the details and make sure this all looks good.
C
To also provide some of the motivation from our end for wanting to do this: a lot of times PRs will go in and update the test images in Kubernetes, like the image references in the e2e tests, and then all of the Windows tests that use those images will fail until somebody from SIG Windows goes in, runs this process manually, and pushes new image tags into the infra to match what was updated.
C
So we're hoping that this can help clean up our test signal a lot too, and we will definitely reach out to SIG Release about that; I think they would be happy to help see this through.
F
I think the main compromise we're making at the moment, in light of what we saw back in July, where everything kind of locked up because we had a ridiculous amount of resource usage from a lot of CI jobs, is that we're just trying to take a more prudent look at what we're spending our resources on and why. That's part of why I can't give hard answers off the top of my head.
C
I'll just throw one more thing on there, and this is more for the SIG Release team as well: in 1.18 and in 1.19, we've had PRs go in between RCs that have completely broken Windows functionality. We are hoping that with this work, since we won't have a manual step blocking test passes, we can get at least some PR-informing jobs out, hopefully soon, to help catch this, so that we don't have it happen again in 1.20.
C
F
I hear that motivation. In that context, I think there's some question of, well, we don't really have presubmit tests for every single architecture supported by Kubernetes, right; we kind of have them just for linux/amd64.
F
We have good clean signal for things that have merged, so I know finding out something broke post-submit isn't ideal, but we do this for a number of other things. Still, I hear you loud and clear, and I think this is a good step forward in general.
F
Yeah, I would say k8s-infra is largely concerned with whether the project is fully in control of all of its infrastructure, so the fact that we have to depend on some mysterious Azure nodes out there somewhere is a concern; we'd be just as concerned if we had to depend on some mysterious GCE nodes running in somebody's personal Google Cloud project or whatever. We're just trying to make sure...
F
...that the community has the capacity to rebuild from scratch if it needed to, and that it could run the infrastructure necessary to do the building and the hosting, and I think everything Claudiu has talked about here moves us toward that goal.
C
Yeah, and that's also why I mentioned at the beginning potentially onboarding some Azure resources into the governance of this group.
C
We have some subscriptions, which I believe are the equivalent of projects in Google Cloud, that Microsoft maintains and that are dedicated to the CNCF, budgeted through the CNCF on our end. The resources that we use to run our periodic jobs are running on there, as well as the nodes we're currently building these images on, and a number of other resources for Cluster API and CAPZ.
C
I guess we weren't aware of this working group and the desire to have governance over these test infrastructure resources, so we'd be more than happy to work towards onboarding some of those as well.
F
Okay, I don't have a solid answer for you on what onboarding for that would look like, but I hear that and appreciate it for sure, and I think Bart and Dims and I can figure out what the appropriate path forward is there. Yeah.
A
I don't hear anything else, and that was the last thing on the agenda. One thing I would like to add: as Nikhita is not here, she asked us yesterday about preparing the annual report for our working group, and she also asked us to involve the community. So if anybody would like to give some feedback or information and help us prepare these documents, feel free to do so.
G
E
F
A
Yeah, I didn't know, sorry about that, so I will also make sure to put some information in the Slack channel.
A
G
A
F
Oh, I was on mute, sorry about that. Is Justin still here? I noticed there are a bunch of PRs against the k8s.io repo from Justin that have been sitting there, and I wasn't sure if there's anything we needed to do to help unblock you, or if you're blocked, or what the story is there.
B
I don't feel blocked. Actually, I can ask a question. I'm going to work on getting the binary artifact promoter to be automated in the same way as the image promoter; I was waiting on the image promoter being done and dusted. Is it finished? Are we good at this point? Do we have capacity to take on the next challenge?
F
We're almost there; I guess you can start on it. The thing for me is that the image promoter is not running in k8s-infra clusters, it's still running over in a google.com cluster. That's just because we didn't have k8s-infra clusters at the time, so I kind of want to see it moved over before you use it as your copy-paste template to stamp out, but otherwise I think you're good to go.
G
B
Most of the PRs that I have opened are promoting the binary artifacts, which I think we can merge. I am basically documenting the procedure, which will be automated but is currently being run manually.
E
A
Okay, as I don't see any other topics, I would like to propose moving going through the board to the next meeting, because there are not a lot of us today and maybe that would be better with more people, and maybe give us back the last 15 minutes today. Or, if any topics, suggestions or questions are still present, we can...