From YouTube: SIG Cloud Provider 2022-05-11
Description
Agenda: https://docs.google.com/document/d/1OZE-ub-v6B8y-GuaWejL-vU_f9jsjBbrim4LtTfxssw/edit#
[joelspeed] extraction/migration schedule
[steve wong] We (Nick and Steve Wong) need “lightning talk” presentations from individual cloud providers now for KubeCon Europe
event link https://sched.co/ytow
draft of deck is here - https://docs.google.com/presentation/d/1x6igOAIJmNN7xlEiiX98TTufsqztXbbu/edit?usp=sharing&ouid=118341252518163971765&rtpof=true&sd=true
A
All right, apologies in advance if my dog barks. I'm going to move my office downstairs, so it shouldn't be a problem in the future, but we'll have to get through one more meeting. So welcome, everyone. It is May 11th, and this is SIG Cloud Provider. It looks like we have a couple of items on the agenda, so let's go ahead and get started.
C
Yeah, I definitely accept this, but I would suggest, Nick, that we may want to split this into one bug per cloud provider, because it seems like we have at least Azure and GCE, so we probably want to break those up.
C
You can assign the GCE ones to me, and maybe we can find out from Bridget if she's got someone for Azure.
A
Okay, this is the issue that we talked about in the past two extraction meetings, so I'm familiar with it.
B
I thought Matt might join us today, but he must have had a conflict or something. I did poke him.
A
Yeah, so does this issue make sense to people? Basically, what we saw, which I believe is the same thing, is that the kubelet and the CCM will fight over the node addresses on upgrade.
A
Yeah, and what's even more interesting is that I guess there might be some cloud providers where you're not passing the node IP to the kubelet. But if you are, that actually filters down the list of addresses, and then you basically get into this case where... let's see, I guess maybe it says it in here.
A
...adding all of the addresses, and it would go back and forth between just having a single address, filtered down by node IP, and then the CCM updating the status with all the addresses, which is obviously broken behavior. So I...
B
I think that the next paragraph, too, is where they were hitting a lot of problems, like upgrading, where the KCM... or rather the kubelet, when it has cloud-provider equal to openstack, is not adding that annotation, and so in the transition from the KCM to the CCM this flapping can occur again.
A
Okay, this is only added when cloud-provider... yeah, so you will see the flapping if... yeah. We were seeing this before this annotation actually existed, but because it's only added when cloud-provider equals external, we still have this occur during the upgrade. Right, that makes sense.
A
Well, I appreciate your LGTM on it as well, but...
A
I'm not sure if this is something that we've fixed. I'm guessing it's something that doesn't occur in the out-of-tree load balancer controller. So if that's the case, you can comment that.
B
Yeah, this is mostly the in-tree controller, not the load balancer controller, because we don't run into the leakage issues there, and this is in-tree. So I don't know if we want to fix this; we should look into cloud-provider-aws and see what they choose.
A
Yeah, and I think we really need to decide what our support model is. If we have load balancer code that we're not supporting in the external and/or in-tree provider, there are really three places now. We should be much more explicit in our documentation about what we're supporting and what we're not.
A
And let me just make sure... actually, do you want me to triage-accept this? Yes, please.
A
Bridget, have you read this one yet? Are you familiar with it?
D
Yes. Is my audio working, by the way? Yes? Awesome. Sorry for missing the earlier discussion. Do I need to explain this from the start? How much context have folks got already?
A
We discussed it briefly, but it might be helpful for you to explain it, since you probably have more recent context than the rest of us.
D
Okay, so this handles a temporary situation during a migration from the in-tree provider to the external CCM, and specifically it covers the case where the kubelets are still running with the in-tree provider and the CCM is also running.
D
The problem with that situation is that both the kubelet and the node controller running in the CCM will manage node addresses on the node. So if there is any disparity between what the CCM thinks the node addresses should be and what the in-tree cloud provider thinks the node addresses should be, then they are going to flap.
D
So we have something like the external cloud provider taint, I can't remember exactly what it's called, but the taint added to a node when it comes up, and this means that we can safely run the CCM and interoperate with both in-tree-provider kubelets and external-cloud-provider kubelets.
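The taint and annotation referred to here appear to correspond to well-known keys defined in the k8s.io/cloud-provider library. A minimal Go sketch of those keys, assuming the upstream names (treat the exact strings as an assumption, not something stated in the meeting):

```go
package main

import "fmt"

// Assumed well-known keys (based on k8s.io/cloud-provider); illustrative only.
const (
	// Taint placed on a node that starts with --cloud-provider=external; the CCM's
	// node controller removes it once it has initialized the node, so workloads
	// don't land on a half-initialized node.
	taintExternalCloudProvider = "node.cloudprovider.kubernetes.io/uninitialized"

	// Annotation the kubelet writes with the value of its --node-ip flag, which the
	// CCM's node controller can read so both sides filter node addresses the same way.
	annotationProvidedNodeIP = "alpha.kubernetes.io/provided-node-ip"
)

func main() {
	fmt.Println(taintExternalCloudProvider)
	fmt.Println(annotationProvidedNodeIP)
}
```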
D
Unless I've missed something, we have a situation where two controllers are controlling the same field on a single object. But we may decide that, because this is an upgrade issue and hopefully this situation isn't going to last for very long, as long as we can make its effects not too serious it may not be worth completely solving; we can just make it livable and get through it as quickly as possible, and my patch is very much in that vein.
D
So we identified an issue where, if Kubernetes is running with an external cloud provider and was also started with node IPs on the command line...
D
Oh sorry, the kubelet. Then the kubelet will annotate its node with the node IP that it was provided, and this means that the cloud provider can filter the node addresses that it is adding to the node. In fact the kubelet does this itself: a kubelet running with the in-tree provider gets a bunch of node addresses from the in-tree cloud provider and then filters them by node IP. And this was a change in 1.24.
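A minimal sketch of the node-address filtering being described, assuming a simplified NodeAddress shape; it is illustrative only, not the actual kubelet or cloud-provider code:

```go
package main

import (
	"fmt"
	"net"
)

// nodeAddress mirrors the rough shape of a Kubernetes NodeAddress (type + address).
type nodeAddress struct {
	Type    string // e.g. "InternalIP", "ExternalIP", "Hostname"
	Address string
}

// filterByNodeIP keeps only the IP addresses that match the configured node IP,
// passing non-IP entries (hostnames) through unchanged. With no node IP configured,
// the full list is returned; with one configured, the list collapses to a single IP,
// which is the difference the flapping oscillates between.
func filterByNodeIP(addrs []nodeAddress, nodeIP string) []nodeAddress {
	if nodeIP == "" {
		return addrs
	}
	want := net.ParseIP(nodeIP)
	var out []nodeAddress
	for _, a := range addrs {
		ip := net.ParseIP(a.Address)
		if ip == nil || ip.Equal(want) {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	addrs := []nodeAddress{
		{"InternalIP", "10.0.0.5"},
		{"InternalIP", "192.168.1.5"}, // e.g. a secondary storage network
		{"Hostname", "node-1"},
	}
	fmt.Println(filterByNodeIP(addrs, "10.0.0.5"))
}
```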
D
We added that so that now the external cloud provider is able to do the same thing, just so that the resulting node addresses are consistent whether we're running in-tree or external.
D
That means that, in this upgrade situation, the kubelet is not configured with an external cloud provider, so it's not providing the annotation. So when the node controller running in the CCM runs, it can't see the node IP, and therefore the node controller is writing the unfiltered addresses and the kubelet is writing the filtered addresses. So they flap between two potentially very different sets of node addresses, and this causes breakage in practice.
D
So my patch, which I readily admit is a get-us-through-this crutch, simply causes the kubelet to unconditionally add that annotation.
D
That means that if the kubelet is running, whether it's configured for in-tree or out-of-tree, it will always add that annotation, so that when we're in this dual-controller situation the CCM can always see the annotation, and therefore we flap between a consistent set of addresses. So it's still not a great situation, but it's a small fix rather than a big fix.
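To make the shape of the patch concrete, a self-contained sketch of the behaviour described above, with hypothetical function and constant names (this is not the actual kubelet change): the node-ip annotation is written whenever --node-ip was supplied, regardless of the kubelet's cloud provider mode.

```go
package main

import "fmt"

// Assumed well-known annotation key; treat the exact name as an assumption.
const providedNodeIPAnnotation = "alpha.kubernetes.io/provided-node-ip"

// setNodeIPAnnotation sketches the patched behaviour: write the annotation whenever
// --node-ip was given. Roughly, the old behaviour was to return early here unless
// externalCloudProvider was true, which is why the CCM could not see the node IP
// during an in-tree-to-external upgrade.
func setNodeIPAnnotation(annotations map[string]string, nodeIP string, externalCloudProvider bool) {
	if nodeIP == "" {
		return
	}
	_ = externalCloudProvider // no longer consulted in the patched behaviour
	annotations[providedNodeIPAnnotation] = nodeIP
}

func main() {
	a := map[string]string{}
	setNodeIPAnnotation(a, "10.0.0.5", false) // an in-tree kubelet now also annotates
	fmt.Println(a)
}
```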
D
Is that clear? Yeah.
A
No, yeah, I think it's definitely clear to me, although... we did encounter this problem, so I have some context, and I agree that the ideal fix would involve not having both the kubelet and the node controller control the address fields. But I think we can merge this even if we were to fix that issue as well. I don't think this would cause any issues, right? We're just...
C
D
That is deployment specific, so yeah, it really depends how your nodes are deployed.
D
We hit this in OpenStack on OpenShift because, firstly, we frequently use multiple networks on all machines, for example for connecting to external storage, and also we have historically (and it's done, so we can't change it) specified our node addresses on the machine in an order that doesn't have the primary one first.
D
Now, to be honest, I don't think it's actually documented anywhere that that's a thing, or if it is, I've never found it. But yeah, we have these existing configurations where the non-primary network is listed on the machine first, and we also specify node-ip on the kubelet, so when the kubelet comes up it knows which of its node addresses is actually the one that can route to the kube-apiserver.
D
So yeah, it's entirely deployment specific. If you don't have a deployment that is relying on node IP, then I guess this isn't going to affect you.
C
That's fair. The reason I ask (and my information is woefully out of date; the last time I looked at this was about four years ago) is that when I looked at this and went to SIG Node and chatted with them, the general impression of the community was, and this had been a recent change at that point, that the IP address in the node object should either be controlled by the controller manager or by the kubelet, but not both, and that having the kubelet control it...
A
I'm interested as to how it's deprecated, because every cluster, like every EKS cluster for example, uses the node-ip flag; the kubelet passes the node-ip flag by default, right?
C
Well, there's a difference between the node-ip flag and writing to the node object, and in fact the node-ip flag is not deprecated; it's all the rest of those flags. And again, I haven't looked in the code in four years, so I'm not the best judge. I'm just trying to understand and just wanted to sort of... yeah.
A
I'm interested to understand what exactly is deprecated about that, but as far as I understand, the default behavior is that the kubelet, with one of the in-tree cloud providers selected, will update the node status with addresses, and that's not unexpected behavior; and then, when you set cloud-provider to external, that's when the kubelet stops doing that.
A
So that's where your problem occurs: when you still have the cloud provider set. And, to be clear, the node-ip flag just makes it more obvious.
D
A change we made recently was... yeah, previously the kubelet was annotating the node object with its node IP, but the CCM wasn't doing anything with it, which meant that when you switched from in-tree to external, your node behavior changed. So all we did was move the node-address filtering code into the cloud provider library, and now both the kubelet and the CCM use the same code; it's just that in the external case the filtering is only done by the CCM.
C
Yeah, I mean, I just wanted to understand, and also, exactly to the point you just made, I wanted to understand how we ended up with two, since obviously that's not meant to be the case. I just wanted to make sure that we were properly dealing with it, and it sounds good.
A
Cool. So I think the consensus here is that it's low risk to merge the proposed fix; it doesn't prevent us from making some other change that might stop both actors from changing the node addresses in the case that Matt is talking about. So yeah, I'll go ahead and review that. Walter, if you want to take a look at it as well, hopefully we can get that merged as a temporary improvement there.
A
Cool, all right. Let's go ahead and just go through the providers really quick. I don't think I have anything for AWS; if you do, speak up. Azure, do you have anything, if anyone's here from Azure? So Bridget's not here. Google?
C
I mean, we have a couple of fixes coming through. I think you've even taken a look at at least one of them, Nick, but nothing huge going on that I'm aware of. I know we are looking at some of our own load balancer IP address changes coming through.
A
Got it. All right.
B
IBM... did you hear me? Yes? Yeah, we merged our two open source repos for the CCM logic down to a single repo back in the middle of last month, and going forward we have a single repo instead of multiple repos, which should simplify things.
A
Got it, all right. It looks like vSphere also has an update.
C
Well, just quickly: I asked on the forum, but I hadn't gotten an answer yet. We are moving the extraction/migration meeting earlier, so...
A
Yeah, I think when Andrew was still here we had taken a vote of some kind. Did we message it to the mailing list? I think we were planning on doing that, but I don't know if we ever did.
A
Yeah, we ended up settling on moving to 9:30 a.m. So I think, unless anyone has a strong objection, we'll just do kind of a lazy consensus. I haven't heard any objections yet.
D
Yes, I have an OpenStack thing which I didn't put in there, because I don't know what I'm doing. I actually filed a CCM-related PR against the OpenStack legacy cloud provider, which I understand requires an act of God to gain approval.
D
But it's kind of CCM related, and I wonder if it fits at this point in the agenda. Essentially it makes a small change to config parsing in the legacy cloud provider so that it doesn't barf if you send it config directives that it doesn't understand, for example because they're only present in the external cloud provider.
D
And the reason for that is, because of complicated CSI things which are described in the PR, the KCM continues to need the legacy cloud config for at least two more releases.
D
It's an upgrade thing: it just means that for the next two Kubernetes releases, before we can get rid of the cloud provider in the KCM, we don't need to maintain two different cloud configs. We can just use one, and the KCM, with the in-tree cloud provider, will ignore anything that it doesn't understand. As it is today, if the user adds external CCM config to their cloud config, then their cluster dies because the KCM doesn't come up.
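As an illustration of the behaviour being asked for (not the actual PR), a sketch of a cloud.conf reader that warns about and skips directives it does not recognise instead of failing, so a config written for the external provider can also be fed to the in-tree provider; the directive names here are placeholders:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"strings"
)

// Directives the legacy (in-tree) provider understands; anything else is tolerated
// and ignored. These names are placeholders, not the real OpenStack option set.
var known = map[string]bool{"auth-url": true, "region": true}

// parseLenient reads "key = value" lines and keeps only known keys, warning
// (rather than erroring) on unknown ones, so newer external-only options don't
// prevent the controller manager from starting.
func parseLenient(conf string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(conf))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "[") || strings.HasPrefix(line, "#") {
			continue // skip blank lines, section headers, comments
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		k = strings.TrimSpace(k)
		if !known[k] {
			log.Printf("ignoring unknown cloud.conf directive %q", k)
			continue
		}
		out[k] = strings.TrimSpace(v)
	}
	return out
}

func main() {
	fmt.Println(parseLenient("[Global]\nauth-url = https://example/v3\nnew-external-only-option = true\n"))
}
```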
A
All right, now I think we can move to the rest of the agenda. Sure, do you want to go ahead?
E
Yeah, thanks. So this is probably more a question for the extraction meeting, but that's a little bit late in the day for me, so I've not managed to attend it so far. I had heard on the grapevine that there were some delays with the webhooks project, so I was wondering what the schedule for the extraction/migration stuff is, and when we're thinking of flipping the feature gates so they're on by default for the cloud provider stuff, and also where's best to track that in future.
A
Yeah, I'd say that's definitely a good question. Walter, do you have an answer for when, at the earliest, we would flip the feature gates on by default?
C
That would be... so right now... gated... sorry, overuse of the word gate. Right now this is gated on the fact that the world breaks if we turn them on, and I'm actually the culprit on this one right now, I'm afraid, y'all.
C
I have a PR that is supposed to... sorry, let me start again: I have a PR to set up a prow job that will actually show all of the breakage.
C
I haven't landed it yet because I haven't actually sent it out yet, so that's my fault. But essentially we need to send that PR out, and then we need to start getting everyone to take a look at the level of breakage. I'm also due to write a document that I'll send out to this SIG first, and then it needs to go to SIG Release.
C
That document explains all of the problems that we have with testing. The issue is that a huge number of our tests are Google specific. I say huge; it's probably less than 100 of the tests that are Google specific. But while we have a working CCM distribution in sig-cloud-provider-gce that's outside of k/k, inside of k/k, with the kube-up scripts and no CCM, we don't actually get a completely healthy cluster when you turn off all the cloud provider code.
C
As a result, quite a few of the tests fail, and so we either need to get people on board with fixing the tests, or basically say we're okay with not running those tests, or we need to build a test system which would allow us to push running those tests off to a separate repo that then takes...
C
...the current code, builds the distribution on top of it, runs those tests there, and then pulls the test results back. And that's the document that I said I need to write. There's an earlier version of it that's actually already out to SIG Cloud Provider, which explains that last option I just talked about; it was written by Joe Betts and it's called "last known good." So if what I said seemed kind of complicated and you want to better understand it...
E
C
That is oddly kind of a call for the storage team, so the best person to ask about that is going to be Michelle Au, but specifically it has to do with a very specific legacy use case. This is generally going to be, say, an Azure, an Amazon, or a Google customer that has, let's say, a database with a bunch of storage, and they want to migrate to Kubernetes and bring that disk with them. The CCM webhook...
C
...work has to do with extracting the little bit of kube-apiserver code that allows that to happen today. It's basically an admission controller (not an interesting webhook controller, just an admission controller) that calls into the cloud provider code. So we want to pull that out and replace it with a webhook that does the same thing, so that we no longer need to link the cloud provider code into the kube-apiserver.
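To make the mechanism concrete, a minimal, hypothetical sketch of the kind of mutating webhook being described: it receives an AdmissionReview for a PersistentVolume and responds with a JSON patch adding a topology label that the cloud provider would normally supply. The struct shapes, endpoint, and label value are illustrative assumptions, not the actual extraction work:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal slice of the AdmissionReview wire format; just enough for illustration.
type admissionReview struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Request    *struct {
		UID string `json:"uid"`
	} `json:"request,omitempty"`
	Response *admissionResponse `json:"response,omitempty"`
}

type admissionResponse struct {
	UID       string `json:"uid"`
	Allowed   bool   `json:"allowed"`
	PatchType string `json:"patchType,omitempty"`
	Patch     string `json:"patch,omitempty"` // base64-encoded JSON patch
}

// labelPVHandler mimics, in spirit, what the in-tree PersistentVolumeLabel admission
// plugin does today: look up the volume's zone/region from the cloud provider and add
// them as labels. Here the value is a hard-coded placeholder, and a real webhook must
// also handle the case where metadata.labels does not exist yet.
func labelPVHandler(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
		return
	}
	patch, _ := json.Marshal([]map[string]interface{}{
		{"op": "add", "path": "/metadata/labels/topology.kubernetes.io~1zone", "value": "zone-a"},
	})
	review.Response = &admissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		PatchType: "JSONPatch",
		Patch:     base64.StdEncoding.EncodeToString(patch),
	}
	review.Request = nil
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/label-pv", labelPVHandler)
	fmt.Println(http.ListenAndServe(":8443", nil)) // real admission webhooks must serve TLS
}
```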
A
I have two things to add to this. One is that I am planning on continuing that work and getting it to alpha in the next release, so I guess 1.25. And the second one is just kind of an interesting note on this: in EKS we actually don't have that enabled... I guess it's a...
E
Yeah, I know it's enabled in OpenShift. This is one of the reasons I'm keen to follow that webhook work, to see if there's any way that the Red Hat team can help with it. Okay, given it sounds like it's quite a lot to do.
A
Awesome, all right. Thank you, Joel. Next we have Steve. Do you want to give us an update on KubeCon?
F
Sure. Nick and I are planning to be there physically, and I think if we look at our Slack channel there were maybe five other people who are going to be there too. We're going to try to do this like we did for KubeCon North America, where there'll be an opening that covers the overall cloud provider SIG status.
F
And with Andrew gone, I guess maybe Walter has volunteered for a role there, if I'm not mistaken, but that doesn't appear to have been updated in the deck; it's still the stuff that I cut and pasted over from last time. Then IBM has submitted their little lightning talk. For everybody else...
F
...it's due. If you can get that to me... I'm flying out Thursday, so I'd really like to have it by then. If you intend to give me a video, I'd like to have it before I get on the plane, because if I have to do any video editing or transcoding to get it into the deck, I'd really prefer to do it on my home system rather than on a laptop in a hotel room with lousy Wi-Fi.
A
I have a comment there. In the last one we had, I think, one, maybe two cloud providers do actual demos, and then we had, I think, a handful that just kind of talked through some slides. So I propose that, if you're planning on just giving slides, you don't need to submit it in video format; you can just give us the slides. Is that okay with you, Steve? Yeah.
F
VMware is... it isn't in the deck yet, but Blue Braun is going to give me one, and it's unclear yet whether ours is a video or slides only.
A
Okay. I will say that AWS... I'm going to do just slides and I'll read it myself. Google... Walter, are you planning on...?
A
All right, anyone else, like IBM or OpenStack, planning on submitting anything?
A
Sorry for doubting you.
F
And I think we'd better go for Wednesday, because I am signed up for one of those all-day pre-events on Tuesday, unless you want to make it Tuesday evening.
F
I think we can sign up to host a meet and greet during the contributor summit, which I believe is Monday, and we may have missed the deadline for it. But if the past is any guide, we might be able to talk our way into getting a table even if we did miss that deadline.
F
Now, that's Monday, so it's possible that people won't be there by Monday. But anyway, I think that was the one procedure to get yourself a table in some room. And if we wanted to make it later in the week, from things I've seen before, the conference, I think, provides lunch, so we could just declare that we'll find each other at a table at lunch at the convention center, or we could try to meet during one of the evenings.
F
What was your intent: just a meeting of the people in the SIG who will be there, or do we want to open it up to the general public?
A
I think we'd open it up to anyone who feels like attending. I doubt we'll get a very large crowd. Yeah.
F
So I think the contributor summit thing might not be the best option there, because you have to be a Kubernetes community member to get the badge tag to get into that area, wherever they hold it. So some of the users, I think, would have trouble qualifying on that basis. Got it.
A
Cool. I think we are pretty much out of time, so I'm going to go ahead and stop the recording. Thanks for joining, everybody. See you next week if you're going to KubeCon, otherwise in two weeks.