From YouTube: GMT20230802 000326 Recording 1920x1200
A
All right, today is August 2nd, and this is the Velero community meeting. First, for the overall status: 1.11.1 was released on July 25th. And for 1.12,
A
we have reached the code freeze state, and now we are working on the remaining bug fixes and the release milestones. We will run the manual tests by all the members in the Velero team, triage the reported bugs, and see when we can fix all of them and when we can reach the RC. And for 1.13, we are still collecting candidates for 30 days, and if you have any requirements or issues that need to go into 1.13, you can add the candidate label.
B
C
Just in case — sorry to interrupt — just in case you don't have permission to add labels, please feel free to ping any maintainers in the channel, and we're gonna triage them accordingly.
D
A
Yeah, and we may start with the first one: the review of the issues for the 1.13 candidates.
A
D
Yeah, I actually picked the PR I made earlier, the one that you requested, so yeah. That's my update.
A
E
And just to be clear about the issue: the issue with this PR is that if you use profiles for your AWS credentials, it doesn't currently work with Kopia if you're not using the default profile. So this change is needed to support that, and this is an issue for both 1.10 and 1.11. On main, restic already works fine with non-default credentials, but it's not working properly within the Kopia path.
F
A
Yes, okay, all right. Now let's go to the current topics. The first one is from myself; it's still about the requirement to have the data mover and the local snapshot data together. You know, we have discussed it both internally and through the community meeting, and we found out there are some
A
details, or some complexity, in making this work based on the current architecture of the data mover workflow. On the other hand, we still want to gather more input about the value of the feature.
A
As far as we know, in the public cloud, users can produce local snapshots, and when they restore from the local snapshots, that will be much faster than the data mover. And besides that, we still want to know more
A
use cases, on both on-premises and the public cloud. So if you have any, please suggest or propose them under the issue or through the channel. Thanks — that's it for this one. And the next one, Daniel, please.
C
Yeah, I just want to quickly double-check, based on our discussion, that we marked a few different candidates. Could you open the link?
C
Yeah, I believe we don't have any disagreement regarding "Kopia does not respect the AWS credentials", because that depends on Kopia. And "could not retrieve CRs or CRDs in the Velero server after running for a while" — I think that's probably okay, because that time window is relatively small. But the third one I want to double-check with Shubham.
C
I think there's some disagreement regarding the implementation of this in the discussion of the PR, so I believe it's okay that we kick it out of 1.12, right?
B
Yeah — that is, we should be good with 1.13 or a 1.12 z-stream. Sorry, we should be good to defer it to either one.
D
E
One thing we should probably talk about is that this does involve some API change, so whether it makes sense to have it in a patch release, or whether that's too much of a change for a patch release — that's a different discussion, depending on what we ultimately end up deciding, especially if we end up with, you know,
F
E
CRD changes — then that's one thing. But I think there are certainly some — I'm not sure, but again, I guess that is the question. So Shubham: is it okay for this to be in 1.13, or is it something we'd really want to have in one of the 1.12 patch releases? That's something we'd have to work through as well.
E
Yeah, I think we all agree that it's not essential for 1.12.0, so deferring it absolutely makes sense. You know, since we have open questions, I think the priority now is to work out the open questions and settle exactly what we want to do, and once we know that, we can decide where it fits in the release schedule.
B
C
Yeah, I think we're good for this topic. The next one is about Andrew as a maintainer. I opened an issue for that and I communicated with the rest, I think. So I think we're okay with the vote here: in this issue, please add a thumbs-up or a plus-one by adding a comment. I'm gonna keep the issue open for two weeks, and in the next community meeting in Beijing,
C
we're gonna see the result of the voting and have another round of discussion, to either add him as a maintainer or, you know, take action accordingly.
B
C
Okay, and the third one is regarding the Velero contributor role. I was a little confused about this one: is this mainly about recognition or appreciation for the contributions, or do we need a custom role allowing a certain group of users to do something?
C
So what's the biggest issue here? I see Tiger suggested that non-maintainers should be able to add labels, and—
E
G
Can I chip in a little here?
G
Yeah, I think I probably brought it up in some way, because I wanted to assign issues. You know, David was working on block PVC support, and I said that it would be kind of easier if I could assign issues, because CloudCasa people have worked on at least 30 to 40 issues in the last month or so. I guess it came up in that context. So the main idea is to be able to assign issues and maybe add labels. It's not about recognition; it's not about anything else. Yeah.
C
So yeah — I double-checked the roles in GitHub: we would need to create a custom role, and there's no matching role in the governance model. So how do we, you know, control the lifecycle of such a contributor? So I think a shortcut may be to just ping us — ping any maintainers — and say "I want to work on this one, please assign it to me." Yeah.
B
C
Yeah, I see Tiger also suggesting to create — you know, add some bot — to do that. I think that may also work, but then we need some investigation.
C
If,
if
you
think
that's
a
valuable
Improvement,
please
go
ahead
and
open
an
issue,
and
we
can,
you
know,
follow
up
that'll,
be
easier
to
track.
C
But I'm a little curious about this part: does it mean anyone can assign anyone, or is this just for the maintainers? Are there any access controls or restrictions for this command?
D
I don't know either — I was just suggesting that's one mechanism.
B
D
C
Yeah, so please go ahead and open an issue and we can double-check. But I think in the short term we are okay with, you know, just reaching out to the maintainers to do the labeling or assignment.
H
I have seen in other open-source projects a maintainers/reviewers type of split, where folks who are working their way towards maintainership get reviewer rights that allow them to approve certain PRs and be one of the approvers on a PR. And you can make it so that they can approve a subset of the project — like only the plugins or something. That could be one thing you could consider, just as a stepping-stone tier to maintainership. But yeah.
B
H
C
Yeah, one challenge here is that we all work on the project part-time — at least for me, I'm working on the project part-time — so we really don't want to make it too complicated in terms of governance overhead. If you are okay with the current maintainer role and all its permissions, I think we just keep it as-is in the short term, and if you think that suggestion is really a strong requirement, please feel free to open an issue.
C
E
One other thing, specifically on the comments around what Sean is saying about allowing contributors to review PRs: the issue there would primarily be not one of automation or of groups — it's one of what the governance doc says, because the governance doc right now says two maintainers must review. So to change that is to change the governance doc. That would be the conversation.
C
Yeah, I would agree that if we are to make any significant change, we want to change the governance doc first and then have it reflected in the process, as Sean is suggesting. Yeah.
E
And the governance doc does mention contributors. It's just that there's not really much — I mean, it's not very clear as to what the role would be or what permissions they would need. It's more of a vague kind of reference to those who contribute to the project. There's not much in the governance doc around contributors right now, but they are mentioned.
C
Yeah, I see. But that sounds like a role that doesn't map to any permission on GitHub, because currently, I believe, as a regular user — say Tiger or anyone — you can already do code reviews or comment on issues. I don't think the current model is blocking them from doing anything.
E
The only area where that matters right now is that the governance doc says two maintainers must review — must approve — a PR before it can merge; that's what the governance doc calls for.
If we wanted to change that in the future to allow contributors to be one of those two approvers, for example, that would be first a governance change and then a GitHub permissions change. Yep.
B
E
And I don't believe this proposal mentioned that. I think Sean mentioned it as a possibility, but I don't think the issue raised here actually even talks about that. That's a separate issue — if someone wants to bring it up, you know, that's fine, we can talk about it, but I don't believe it's been officially proposed.
C
Yeah, so I just want to double-check: we are generally happy with the current model, right? If there are any concerns — I mean, for the suggestions by Tiger or Sean — we can open issues and follow up on them. But with the current model, no one has any strong feeling that we want to change it, right?
C
So there are a few other issues, but I think they are relatively low priority. The first one is relatively easy: it's a question for Scott, whether it's on the OADP side — yeah, that OADP—
E
C
E
No, no — there's no OADP code that does anything with deleting backups. This would all be Velero, and I believe the issue here is that the backups are referencing a backup storage location that no longer exists — yeah, the backups can't be garbage-collected because the backup storage location
C
E
doesn't exist. So basically — I don't think we're actually creating DeleteBackupRequests, so I think the title is slightly incorrect. Basically, the garbage collection controller runs once an hour; it loops over all expired backups.
E
If the backup storage location doesn't exist, it logs a warning saying "I can't delete this now" and moves on. If the backup storage location is available, then it creates a DeleteBackupRequest, and the backup deletion controller handles it.
E
But right now — and my understanding is this is the way it's worked since before I was on the project — the idea is that the main point in expiring backups is to remove the storage in S3 that you're not using in the bucket anymore. If the BSL is not available, we can't touch the bucket; we can't clean up anything.
E
C
E
Well, you know, the only thing Velero can do is check: does a BSL with this name exist in the cluster? I mean, you can't prove the bucket doesn't exist anymore; all you can say is whether there's a BSL. So if a user goes and deletes the backup storage location from the cluster, then Velero has no way of accessing that bucket.
E
That doesn't delete the data. So I think what's happened — and I've seen this with OADP, among other places — is they have some backups, then they decide they're going to stop using that bucket: they delete the backup storage location, create a new one, and then these old backups, when they expire, Velero can't delete them. Clearly, Velero is not expected to be able to clean out a bucket that no longer has a reference, but when they do a "velero backup get" and they see all these backups that should have been expired — the example here shows "failed validation" because the BSL doesn't exist — they're never cleaned up.
They could just do a delete of the CR, because there's no bucket data associated with it anymore. So, I mean, we could update things in the garbage collection controller and say: oh, the BSL doesn't exist, so we're going to delete it anyway. And when we say "delete it anyway", this is going to be a limited delete — we obviously can't run the delete item actions.
E
C
Yeah, I personally don't think that's — I mean, that has, you know, both good and bad sides. If the garbage collection controller decides to remove the CR without touching the BSL, because it's not available, then the data may be orphaned there forever. But if garbage collection just silently failed and retried in the next cycle, it's possible that the user can do some debugging, create the BSL accordingly, and the garbage collection can handle them, right? Yeah.
D
C
G
A
An annotation on the backup — you could do that, yeah. Maybe not just a simple retry number, because right now maybe the backup storage location does not exist, but it may afterwards.
D
A
So we need a mechanism to retrigger the backup deletion.
A
Okay, but let's say the retry number is five, and Velero retried five times, but after that the BSL is created back. That's the problem.
E
A
Well, you know, consider that behind the backup there may be lots of data — for example, the volume data backup: say it had backed up 500 GB or more, and we may delete the backup CR but leave the data in the BSL, and the data in the repository will be orphaned forever. That's right, yeah.
E
It's only going to be orphaned as long as the BSL is inaccessible anyway, in which case Velero is never going to delete it. If the user actually later goes back and recreates that BSL, or the network problem resolves itself and it's accessible again, then the backup sync controller is going to run and say: oh, this backup is not in the cluster, it's in the BSL, I'm going to recreate the CR. And then, on the next run of the backup deletion — the garbage collection controller — it'll clean it up.
C
E
The other point: I think we don't want to change the default behavior, because you don't want this to suddenly change. But I think if we added some kind of server setting — for example, a number of retries, where the default is zero, meaning retry forever — then basically, if you set it to five or ten, every time you fail you increment that number.
E
Then, once the number hits the limit, you delete it anyway. You'd also need to change the backup deletion controller code, because right now, if you create a DeleteBackupRequest and the BSL is inaccessible, it'll bail out and not do anything. So you'd probably need to add either an annotation or a field to the DeleteBackupRequest —
E
you know, kind of a force option spec field, to say: I want you to delete this anyway, even if the BSL is invalid, and just delete the CR. So that would be an API change.
E
So this isn't something you'd want to throw into 1.12, but I think that approach could probably resolve this: the default behavior changes for nobody, but if you choose to set, you know, a max garbage-collection delete retries setting, then once a backup hits that number of attempts, you delete the CR and move on.
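A minimal sketch of the retry-counting idea discussed above. Note the annotation key, the setting name, and the types here are all hypothetical stand-ins for illustration, not Velero's actual API; a limit of zero preserves the current "retry forever" behavior:

```go
package main

import (
	"fmt"
	"strconv"
)

// backup stands in for a Backup CR; only the annotations matter here.
type backup struct {
	name        string
	annotations map[string]string
}

// Hypothetical annotation key for counting failed GC attempts.
const gcAttemptsKey = "velero.io/gc-failed-attempts"

// shouldForceDelete increments the failure counter stored on the backup
// and reports whether the configured retry limit has been reached.
// limit <= 0 means "retry forever" (the current default behavior).
func shouldForceDelete(b *backup, limit int) bool {
	if limit <= 0 {
		return false
	}
	n, _ := strconv.Atoi(b.annotations[gcAttemptsKey]) // missing key parses as 0
	n++
	b.annotations[gcAttemptsKey] = strconv.Itoa(n)
	return n >= limit
}

func main() {
	b := &backup{name: "daily-backup", annotations: map[string]string{}}
	for i := 1; i <= 5; i++ {
		fmt.Printf("attempt %d: force delete = %v\n", i, shouldForceDelete(b, 5))
	}
}
```

The real change would also need the force-delete path on the DeleteBackupRequest side, as the discussion notes, since that controller currently bails out when the BSL is unavailable.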
C
Yeah, so I'll double-check whether the reporter is happy with that for the garbage collection controller and make the change accordingly. And the next one is just a very early check of your thoughts, because I've come to think that, over time, as we're adding more and more attributes to the CRs, these attributes have started to conflict with each other.
C
One example is the existing resource filters — the include/exclude resource flags. And the other piece of background is that we are trying to deprecate restic, so there are many restic-related attributes that won't make sense anymore. So what do you guys think: is it a good idea to think about bumping up the API version of Velero from v1 to v2alpha1?
C
I did some investigation, but I realized that even for Kubernetes native resources, they are really reluctant to do the bump. They're still adding new attributes and deprecating things without changing the version. So yeah.
E
I mean, I would say I would wait to actually increment it until you actually make that breaking change, because you don't want to increment it and then later make a breaking change and have to increment it again. So if you're adding a new field, for example, as long as that field is optional and previous versions tolerate it, that's not so much of an issue. It's when you remove a field that previous backups or servers might have used.
E
I guess we should identify what actual fields are being removed. For example, I think right now the field in the backup that says whether you're using Kopia or restic — that's not a field that we'd necessarily remove. Once we remove restic support, if a user specifies restic, that'll be a validation error, but the field still exists, because we might in the future add another file system backup other than Kopia. So I think we wouldn't want to remove that uploader type field, for example.
E
So I don't know exactly what we need to remove for restic, but I think when we get to the point where we actually have to make a change to a CRD that is a breaking change, that's when, you know, the increment certainly would be required.
C
H
Well, there's a good example — I mean, I think CRD v1beta1 to CRD v1 is a good example. They changed defaults moving between the two of them, and that's, you know, technically a breaking change you would have to be aware of. There's some stuff like that.
So, you know, because even just changing a default value is changing the semantic value of a thing, even just changing the default from restic to Kopia in that uploader type, or something like that, would be a breaking change—
E
—were it an actual field in a CR, right.
H
What have we kind of silently stopped using, or changed the semantic value of? And if we were going to make a v2, what would the conversions be between these new fields and the old ones — that kind of thing. Because I think, when you're thinking about this, you can't just bump it and then remove the old one. You kind of have to have them exist for two or three releases.
E
D
E
...to, you know, refactor and kind of redo some of this stuff in a way that makes it easier for users to understand. Because maybe we have a bunch of these random fields that we've added in response to individual user feature requests over the last four or five years, in a kind of haphazard way — which, I mean, you do that one at a time: you add this, you add that, because you need the functionality, but—
H
E
You really want to go from a v1 to a v2alpha1, and that's an opportunity to refactor it in a way that makes more sense and is easier to understand. Maybe it has fewer fields; maybe there are some features we don't use anymore. But then, to Sean's point, you also have to have the two coexist side by side and have a way of converting from one to the other. So it's a lot more than just declaring a new API version.
C
Yeah, sure. So I checked this conversion webhook thing, and it seems like, in Velero's code, we could support only v2, but whenever a user tries to write a v1 CR, we convert it to v2 silently. I think that might work. But I'm really curious: when I look at the current status in Kubernetes, people are still making changes to the v1 native resources. Yeah, it's a little surprising to me.
H
Editing fields — adding fields — is not a breaking change, and it's completely reasonable to do on a v1; it's a lot of what the APIs do. They'll add fields, and then it gets weird, because it's like: oh, this new field — say you had a field that was a singular value, and it's like, oh, actually we need multiple values, so you add a new field that's a plural — a "things" sort of field — and then now you have to say: oh, if "thing" is set, and—
F
H
E
And yeah — eventually, yeah, Services did that with, I don't know, the IP address or something; I don't remember what it was.
H
They were dealing with that, yeah — when they had to add IPv6, I think they had to do something like that, and that's completely reasonable to do. And I think it's also reasonable that, if we have a bunch of stuff like that from over five years of adding features, we say: let's look at what a v2 would look like, and maybe it makes sense to do this. So I think the exercise is 100% worthwhile, right?
H
E
Oh, it's now time to simplify this; it's now time to refactor this into a smaller set of fields that does everything we need, makes more sense, and has a clear conversion — that's when you make the v2. So I don't know that we're there yet; I don't know, we'd have to look at the list. You know, are there fields...
H
For the v2 — are there specific things that you were looking at with this v2 move that you would want to do, or is this more like "are we ready"? Do you want to do an exercise, or do you have specific ones that you're thinking about, as far as the API is concerned?
C
Yeah, for example, in 1.11 we introduced some include filters which do not work together with the older include/exclude fields. And as I mentioned, there are some restic-specific fields: if we remove restic from Velero, those fields don't make sense anymore. And the third reason is that we are discussing rethinking the relationship between the backup repository and the backup storage location — we saw there were some problems in the original design and we are trying to change it. But I did some investigation,
C
and I found that it's not really common: no matter how old the resources are or how long they've been there, people seem really reluctant to bump up from v1 to v2. They see a lot of hassle, or maybe some — yeah.
C
H
To give you an example: I think Kubernetes, from like 1.0 until — was it 1.15 or something — never bumped Deployments. Whenever Deployments were added — I think maybe 1.3 or 1.4 — they never bumped Deployments to v1 until like 1.15, multiple years after. So they've kind of set the standard that once you create an API and enough people use it, you never change it.
E
Yeah, and I think what you're saying is that if you need to change it, we can be strategic about it. Like: okay, maybe we need to change BackupRepository and the BSL, because we're going to make some breaking changes there, but we're not changing Backup now, so we don't need to touch that one, you know.
F
E
Basically, if a specific CRD needs to be simplified — if we need to make a breaking change — that's the only case in which I would increment it.
E
B
C
E
Yeah, be careful with incrementing, and also do it only when it adds value. You know, don't do it just to change it, but if it makes sense — like, oh, this CRD now has 30 fields we don't need, because we've added new ways to specify the same things and we don't need to keep all of them around — then we can say:
E
okay, from this version of the CRD forward, we're no longer supporting the old way. Label selectors is an example where it's easy to do the conversion: if it's specified the old way, we just use the new one. You'd have to make sure that — for example, with the includes/excludes — if you wanted to drop the old way of doing it and only do the new way, there was a way to validly map them.
C
So you can convert that — sure, sure. That's the conversion webhook.
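As a toy illustration of the kind of mapping a conversion webhook applies when a client writes the old version, here is a sketch. Note the struct and field names below are invented for the example and are not Velero's real spec fields; the point is only that every old field must map losslessly onto a new one:

```go
package main

import "fmt"

// specV1 mimics an older CR spec with separate flat filter fields.
type specV1 struct {
	IncludedResources []string
	ExcludedResources []string
}

// filter and specV2 mimic a hypothetical newer spec that folds both
// lists into a single filter struct.
type filter struct {
	Include []string
	Exclude []string
}

type specV2 struct {
	ResourceFilter filter
}

// convertV1ToV2 maps every v1 field onto its v2 counterpart, copying
// the slices so the two versions don't share backing arrays.
func convertV1ToV2(in specV1) specV2 {
	return specV2{ResourceFilter: filter{
		Include: append([]string(nil), in.IncludedResources...),
		Exclude: append([]string(nil), in.ExcludedResources...),
	}}
}

func main() {
	old := specV1{
		IncludedResources: []string{"deployments", "persistentvolumeclaims"},
		ExcludedResources: []string{"events"},
	}
	fmt.Printf("%+v\n", convertV1ToV2(old))
}
```

When the mapping is this direct, the conversion is cheap; the discussion that follows is about cases where it isn't, such as a flat resource list that a newer spec splits by scope.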
E
But that's where it might get challenging, because if you just have a list of strings that says "these are all our resources", and we have to say which ones are cluster-scoped versus, you know, namespaced, that could be pretty complicated. So we'd have to figure out whether the value of simplifying is worth the complexity of dealing with conversions.
E
But those are all case-by-case situations that you don't handle in the big conversation of "do we want to deal with incrementing". Instead say: oh, here's the problem — this CRD needs to be updated and simplified to solve these problems. Let's talk about the issues, you know, field by field or whatever, and decide whether we want a v2.
H
...get rid of the semantic value of the restic fields, and not, like, convert them to a different field for other things or something like that, then you are going to need to bump the version — yeah, you're going to need a bunch of versions, so you're not wrong about that. You could do something like keep those fields around, make new ones that do that, and then convert them to those new fields.
H
But if you want to remove restic from the thing — which I think makes sense — I do think you're going to have to bump the API. So it does make sense: if this is part of removing restic from the thing, I would just say, let's take the learning experience there and start to name things in a way that isn't tied to a specific implementation of whatever tool we're using.
E
C
Okay, so yeah, thank you guys. Probably in the short term we will not do any bump, but yeah, we should look into it and have a plan, and, you know, after our communication we can probably take action. Okay.
D
A
Okay, thanks. And the next one is about the Velero support for block-level, or block-mode, volumes. So we have discussed this here — is there any more comment? Yeah — I wanted to advocate, and to get a clarification: this was tagged "needs design", and that made me fear that perhaps it would need to go into the next minor release, 1.13. And I wondered, if there are no API changes and we're able to come up with a little design...
C
So you're saying that you want this to be in 1.12?
E
No, no — 1.12.0 we're cool with; there's not time for that. But in other words, if this is a change that can be made to work with no API, no CRD, no user-visible changes, it's something we could fit into a patch release — you know, 1.12.1, 1.12.2, whenever it's ready.
C
Well then, I think in that discussion there may be some mismatch in terms of understanding — like, what's this "phase zero", and do we really want to do the phased approach, right?
A
Yeah — actually, the implementation is exactly not compatible with our target for block-level backup. So actually, I have clarified our targets here, but I would say that the current approach does not support that. It means that we can make the, you know, block volumes backup basically work for the data-path approach, but finally we will have the final block-level backup solution, and
A
I believe we want to be able to use this one. But on the other hand, the final solution will need to be done step by step, because it requires architectural changes at the data-path level and also to the repo format — that's the reality. So about the phase zero, as we have discussed here,
A
it's like we can let the current approach go first, but finally we will replace it with the final solution. That's what we have discussed, but—
C
E
...for now, and then in the future, yes — I think we all agree we want a better solution, and even if that's a breaking change, that means this phase zero could be, maybe, throwaway code, and then in 1.13, for example, maybe we don't use that anymore. I think that's—
E
F
E
...like making sure the users know. And then when we document the support: you know, the performance here will not be as good as the eventual solution, and we're going to change this in the future, but this should work for now.
G
Yeah, I support that, Scott, and I can give a brief update on what David has been able to do so far. So he added a mount point to this —
G
you know, to the node agent. So far, I think, the host-path mount from the kubelet that the pods use only contains the file system volumes, so he's adding the kubelet plugins directory, and then he's able to read the device successfully. Now I think he just needs to implement the uploader part to make a full copy, treating it as a file, not a device.
G
So that's where the changes currently are. So it will be without any API changes and without user-visible changes, just —
G
as said. So it's going to be like phase zero, I agree, and as and when a good design and final implementation come into the picture, we can always
You know, tell users that here is a better, more conformant approach. But I don't see any reason not to put this change in, assuming it turns out to be a completely invisible change to the users.
A
Yeah, from these comments it looked like we want to make the Kopia uploader treat the block device as a regular file, because then the uploader part will work — because on Linux everything is a file, right? So—
A
Exactly, yeah — we just want to do that. Actually, for everything else, like the restore — I just saw this PR this morning, so I need to look at it further, but for now I have a specific question about this one: why do we need to, you know, add the other mount path for—
G
You know, there is one difference between CloudCasa and Velero in the way we are doing it: we have a mover pod where we mount the snapshot as a PVC — that is exactly what Velero is doing — but it differs from there: Velero is not using that pod at all, and it's letting the node agent do the backup.
G
In our case, the pod which mounts the PVC actually runs the backup. So I would suggest, at least for 1.13, I think you should add it to the list — at least to consider this design change — because it'll help in two cases. First of all, this accessing from the host may or may not work in all cases — we don't know, right, especially for devices. And second, for performance reasons: if you do the transfer from the mover pod, you can automatically scale,
G
It depends on how many mover pods you can start, right? So — because I know that you're asking for 1.13 roadmap items — I would highly recommend considering this as a design change, because keeping the node agent doing the backup is not necessary once you have the PVC from the snapshot.
A
Yeah, actually, we discussed this during the original design for the data mover, and those were the two options: either we launch a backup pod and run the data path inside that pod, or the current approach.
A
In the current approach, we run the data path inside the node agent and use the pod only for volume mounting. The reason we chose this second option is that putting the data path into the pod would make the architecture more complex — that's one reason — and the other reason is concurrency control.
A
Actually, it isn't the case that the more backups run in parallel the better, because we need to control the concurrency number: the backups consume resources — CPU, memory, and network throughput. So we need a better way to control that.
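The concurrency control being described — capping how many backups consume CPU, memory, and network at once — is the classic bounded-worker pattern. A minimal sketch, not Velero's actual mechanism; the limit of 2 and the stand-in `run_backup` body are illustrative:

```python
import threading
import time

MAX_CONCURRENT = 2                       # illustrative limit
slots = threading.BoundedSemaphore(MAX_CONCURRENT)
lock = threading.Lock()
active = 0
peak = 0

def run_backup(volume_name):
    """Pretend to move one volume's data; the semaphore caps how
    many transfers are in flight at the same time."""
    global active, peak
    with slots:                          # blocks while MAX_CONCURRENT are busy
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)                 # stand-in for the actual transfer
        with lock:
            active -= 1

threads = [threading.Thread(target=run_backup, args=(f"pvc-{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Eight requested backups are queued, but the semaphore ensures no more than two hold resources at any moment.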
G
I remember one issue where somebody was talking about using Kubernetes Jobs for running these backups, again just to help with scalability, concurrency and all that — and you're already halfway there, right? Even a Job starts a new pod, and you're already starting a pod, so you've done half the work.
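The Jobs idea amounts to wrapping the existing mover pod template in a `batch/v1` Job. A sketch of what such a manifest could look like, built as a plain dict so it stays self-contained; the image, command, and names are all hypothetical, not anything Velero ships:

```python
def backup_job_manifest(backup_name, pvc_name):
    """Sketch of a batch/v1 Job that mounts a PVC (cloned from a
    snapshot) and runs a hypothetical data-mover command against it."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"backup-{backup_name}"},
        "spec": {
            "backoffLimit": 0,               # a failed transfer is retried by the controller, not the Job
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "mover",
                        "image": "example.org/data-mover:dev",   # hypothetical image
                        "command": ["/mover", "--source", "/data"],
                        "volumeMounts": [{"name": "snapshot",
                                          "mountPath": "/data"}],
                    }],
                    "volumes": [{
                        "name": "snapshot",
                        "persistentVolumeClaim": {"claimName": pvc_name},
                    }],
                },
            },
        },
    }
```

Because the Job's pod only needs the PVC, the scheduler can place it on any node where the volume is accessible, which is the scaling advantage being discussed.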
G
So it's just a matter of running it from that pod. But I agree with you — you need to put in a lot of controls on how many mover pods you can start at the same time, and things like that. Another advantage is that a pod can start on any node as long as the PVC is accessible, right?
G
So that might help concurrency. All I'm saying is that, at least for 1.13 or newer versions, maybe this design should be revisited to see if it makes sense.
A
Maybe, yeah — we can discuss it. Actually, since this solution would make the architecture and workflow more complex, we need to see how much benefit we can get from it.
C
Before we go too far — yeah, I think we're going to try to make sure the intentions are clear by adding a design, but it sounds like most everyone is okay with it going into a future release.
A
Yeah — personally, I think it could go into 1.13.0; it would graduate to a major — sorry, to a minor — release. But if you are going to add it to a patch, I think we need to be more careful, because even though we don't change anything at the API level or anything like that, we are actually adding a new feature, right?
A
We also need coverage tests for all the scenarios, to make sure of the quality. So I'm just curious: why do we need to add this to a 1.12 patch release? Is there any specific user?
A
Two projects, yes — KubeVirt, the Kubernetes VM-management solution, absolutely needs block mode. And beyond that, can we get some of those users, or the kinds of workloads that are requesting this volume mode, recorded in the issues, so that we can evaluate it?
C
Yes, yeah, that's right. And waiting for v1.13 — since we're probably not that far off from at least enabling this from a phase-zero perspective — would be really unfortunate.
E
I want to make sure that — and I think this is important for what we're doing here, especially if we're talking about minimizing risk — for those use cases that currently work, this is not really changing anything about the way things are done, including for a backup that works in 1.12.
E
This shouldn't change anything about the way the backup is made, so most of the risk of bugs here, in theory, should be in the use cases that are broken anyway. Again, obviously testing is needed and all that too, but that's the hope here: we're trying to fix things that are broken while being very careful about minimizing changes to things that are currently working.
C
Yeah, I think I'm okay to add it in a patch, but I think a design is needed, because the original design that's already live is mainly focused on explaining the filesystem path. Since this adds a lot more support, either we update it or we need a new design.
E
Yeah, as I said, maybe instead of a new design document, just make the changes to that design document as relevant — because that design document specifically says block mode is not in scope, and if we're making it in scope in a limited way, maybe we can make references to that. I don't know.
E
And I think also I've heard folks discussing a user's ability to turn this feature on or off, and that could probably limit the scope of risk.
C
As a follow-up, let's write a design and we'll see about scheduling it before 1.13. But even with the design, if we decide to put it in a patch, it may be 1.12.1, or maybe 1.12.2, depending on the timing.
E
That's fine. Okay, let's just get the design approved first, and then we put in the implementation — one step at a time here. I think it makes sense. Most of what we're looking for is a general sense that no one's opposed to putting this into one of the 1.12 releases.
E
We're not saying it has to be 1.12.1, because we don't even have it done yet. So, but I—
H
I will hopefully have something up based on the conversation we had the other day, and I'll make sure it covers some of the intricacies of everything. Yeah, okay.
I
So we'll follow up in the next meeting. At this time I had one more quick question from Red Hat: we got a talk approved for KubeCon Shanghai, and we're considering getting some engineers out there. One of the primary reasons we'd want to come out is to visit, mingle, and design and code and hack with some of you folks.
F
Yeah, I have asked senior management in VMware whether we can have somebody there to support it — there will be a lot of attendees — but I didn't get a final response on this. I'll follow up internally at VMware, and if we get some information about whether we can have some team members from the Velero team attend KubeCon, we will share it with you guys.
F
I need to check on this, okay? Because the situation is always changing, right? Yeah.
G
I think we're already over time, so I won't take much. I mean, Scott already approved this, so if somebody else can take a look and see if it can make it into 1.12, I think it'll be very useful for people — a couple of people have already asked for it in Slack, and I think it's missing, right? Yeah.
E
I think so, yeah — but it was never added to the CLI.
E
Yeah, as long as we consider it as well. I mean, again, even though from a CLI point of view there is a small API change, it's an addition, not a removal, so I wouldn't have any objection — it's not a CRD thing. So I'd be fine with putting this in a patch release as well, something that we could pick for 1.12.1 or 1.12.2, whatever.
G
I mean, I also note that I didn't add it to the list, but CloudCasa fixed one CSI timeout issue in the CSI plugin. I don't think it was released in 1.11.1 — I think the plugin version changed to 0.5.1, but I don't think David's fix was cherry-picked. So that's another one that should go to 1.12, and since it's really a bug fix, I would say it should go into 1.12.0. That's the CSI timeout not being honored in the CSI plugin, right? It was hard-coded to 10 minutes.
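The pattern behind this fix — a hard-coded 10-minute wait replaced by a timeout that honors the user's configuration — can be sketched as follows. The setting name and units here are illustrative assumptions, not the plugin's actual knob:

```python
import os
from datetime import timedelta

DEFAULT_CSI_TIMEOUT = timedelta(minutes=10)   # the old, always-used hard-coded value

def csi_snapshot_timeout(env=None):
    """Return the snapshot-wait timeout, honoring a user-configured
    value and falling back to the old default only when none is set.
    CSI_SNAPSHOT_TIMEOUT_MINUTES is a hypothetical setting name."""
    if env is None:
        env = os.environ
    raw = env.get("CSI_SNAPSHOT_TIMEOUT_MINUTES")
    if raw is None:
        return DEFAULT_CSI_TIMEOUT
    return timedelta(minutes=float(raw))
```

The bug being described is the first branch never being reachable: the code returned the constant unconditionally, so long-running snapshots timed out at 10 minutes regardless of configuration.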
E
Anything that's currently merged to main will be in 1.12, because we haven't branched yet, right?
C
Right. Normally we cut the branch when we tag the RC for the next minor release, and for this particular one I would suggest we put that in the 1.12 branch.
A
Okay — anything else?
A
Okay, if not, I think we can wrap up here for today's community meeting. Have a good day and a good evening, wherever you are. Thank you.